Jan 21 10:36:54 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 21 10:36:54 crc restorecon[4611]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 21 10:36:54 crc restorecon[4611]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc 
restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc 
restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 
10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc 
restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:36:54 crc 
restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:54
crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 
10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:36:54 crc 
restorecon[4611]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:54 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc 
restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc 
restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 
crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc 
restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc 
restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc 
restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc 
restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc 
restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:36:55 crc restorecon[4611]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 21 10:36:55 crc restorecon[4611]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 21 10:36:55 crc kubenswrapper[4745]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 10:36:55 crc kubenswrapper[4745]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 21 10:36:55 crc kubenswrapper[4745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 10:36:55 crc kubenswrapper[4745]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 21 10:36:55 crc kubenswrapper[4745]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 21 10:36:55 crc kubenswrapper[4745]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.749992 4745 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753172 4745 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753266 4745 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753314 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753358 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753408 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753457 4745 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753506 4745 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753614 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753680 4745 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753724 4745 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753816 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753868 4745 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753911 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.753952 4745 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754019 4745 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754082 4745 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754126 4745 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754181 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754239 4745 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754303 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754353 4745 
feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754396 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754437 4745 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754477 4745 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754563 4745 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754611 4745 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754663 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754705 4745 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754745 4745 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754796 4745 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754875 4745 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754934 4745 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.754979 4745 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755031 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755075 4745 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755116 4745 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755158 4745 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755207 4745 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755269 4745 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755336 4745 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755395 4745 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755439 4745 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755491 4745 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755603 4745 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755689 4745 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755751 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755807 4745 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755857 4745 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755918 4745 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.755980 4745 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756035 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756091 4745 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756146 4745 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756198 4745 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756256 4745 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756299 4745 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756347 4745 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756397 4745 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756455 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756514 4745 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756590 4745 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756637 4745 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756680 4745 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756723 4745 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756764 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756806 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756852 4745 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756894 4745 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756935 4745 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.756975 4745 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.757022 4745 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.757434 4745 flags.go:64] FLAG: --address="0.0.0.0"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.757505 4745 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.757597 4745 flags.go:64] FLAG: --anonymous-auth="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.757649 4745 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.757697 4745 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.757747 4745 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.757795 4745 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.757843 4745 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.757887 4745 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.757940 4745 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.757986 4745 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758030 4745 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758073 4745 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758115 4745 flags.go:64] FLAG: --cgroup-root=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758157 4745 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758200 4745 flags.go:64] FLAG: --client-ca-file=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758247 4745 flags.go:64] FLAG: --cloud-config=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758291 4745 flags.go:64] FLAG: --cloud-provider=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758333 4745 flags.go:64] FLAG: --cluster-dns="[]"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758397 4745 flags.go:64] FLAG: --cluster-domain=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758447 4745 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758489 4745 flags.go:64] FLAG: --config-dir=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758570 4745 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758651 4745 flags.go:64] FLAG: --container-log-max-files="5"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758721 4745 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758785 4745 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758890 4745 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.758956 4745 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759014 4745 flags.go:64] FLAG: --contention-profiling="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759074 4745 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759136 4745 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759195 4745 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759258 4745 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759318 4745 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759380 4745 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759431 4745 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759491 4745 flags.go:64] FLAG: --enable-load-reader="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759578 4745 flags.go:64] FLAG: --enable-server="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759661 4745 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759725 4745 flags.go:64] FLAG: --event-burst="100"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759787 4745 flags.go:64] FLAG: --event-qps="50"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759850 4745 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759943 4745 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.759995 4745 flags.go:64] FLAG: --eviction-hard=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760048 4745 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760056 4745 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760062 4745 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760070 4745 flags.go:64] FLAG: --eviction-soft=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760076 4745 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760080 4745 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760087 4745 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760093 4745 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760102 4745 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760108 4745 flags.go:64] FLAG: --fail-swap-on="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760113 4745 flags.go:64] FLAG: --feature-gates=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760119 4745 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760125 4745 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760131 4745 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760136 4745 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760142 4745 flags.go:64] FLAG: --healthz-port="10248"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760147 4745 flags.go:64] FLAG: --help="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760152 4745 flags.go:64] FLAG: --hostname-override=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760157 4745 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760162 4745 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760167 4745 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760172 4745 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760177 4745 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760182 4745 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760186 4745 flags.go:64] FLAG: --image-service-endpoint=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760191 4745 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760196 4745 flags.go:64] FLAG: --kube-api-burst="100"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760201 4745 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760206 4745 flags.go:64] FLAG: --kube-api-qps="50"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760214 4745 flags.go:64] FLAG: --kube-reserved=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760219 4745 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760223 4745 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760228 4745 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760233 4745 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760237 4745 flags.go:64] FLAG: --lock-file=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760242 4745 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760247 4745 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760253 4745 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760264 4745 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760276 4745 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760285 4745 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760292 4745 flags.go:64] FLAG: --logging-format="text"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760299 4745 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760309 4745 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760315 4745 flags.go:64] FLAG: --manifest-url=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760322 4745 flags.go:64] FLAG: --manifest-url-header=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760334 4745 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760342 4745 flags.go:64] FLAG: --max-open-files="1000000"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760351 4745 flags.go:64] FLAG: --max-pods="110"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760357 4745 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760362 4745 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760369 4745 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760375 4745 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760382 4745 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760389 4745 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760396 4745 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760417 4745 flags.go:64] FLAG: --node-status-max-images="50"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760423 4745 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760429 4745 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760435 4745 flags.go:64] FLAG: --pod-cidr=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760440 4745 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760452 4745 flags.go:64] FLAG: --pod-manifest-path=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760457 4745 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760463 4745 flags.go:64] FLAG: --pods-per-core="0"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760468 4745 flags.go:64] FLAG: --port="10250"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760475 4745 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760481 4745 flags.go:64] FLAG: --provider-id=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760486 4745 flags.go:64] FLAG: --qos-reserved=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760491 4745 flags.go:64] FLAG: --read-only-port="10255"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760497 4745 flags.go:64] FLAG: --register-node="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760503 4745 flags.go:64] FLAG: --register-schedulable="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760508 4745 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760520 4745 flags.go:64] FLAG: --registry-burst="10"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760546 4745 flags.go:64] FLAG: --registry-qps="5"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760554 4745 flags.go:64] FLAG: --reserved-cpus=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760560 4745 flags.go:64] FLAG: --reserved-memory=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760568 4745 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760574 4745 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760579 4745 flags.go:64] FLAG: --rotate-certificates="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760584 4745 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760590 4745 flags.go:64] FLAG: --runonce="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760595 4745 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760601 4745 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760606 4745 flags.go:64] FLAG: --seccomp-default="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760611 4745 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760616 4745 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760622 4745 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760628 4745 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760634 4745 flags.go:64] FLAG: --storage-driver-password="root"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760641 4745 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760646 4745 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760652 4745 flags.go:64] FLAG: --storage-driver-user="root"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760658 4745 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760664 4745 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760669 4745 flags.go:64] FLAG: --system-cgroups=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760674 4745 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760684 4745 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760689 4745 flags.go:64] FLAG: --tls-cert-file=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760695 4745 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760709 4745 flags.go:64] FLAG: --tls-min-version=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760715 4745 flags.go:64] FLAG: --tls-private-key-file=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760721 4745 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760726 4745 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760733 4745 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760739 4745 flags.go:64] FLAG: --v="2"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760748 4745 flags.go:64] FLAG: --version="false"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760763 4745 flags.go:64] FLAG: --vmodule=""
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760770 4745 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.760776 4745 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761002 4745 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761009 4745 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761015 4745 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761021 4745 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761025 4745 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761030 4745 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761035 4745 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761039 4745 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761044 4745 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761048 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761053 4745 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761059 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761066 4745 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761073 4745 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761078 4745 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761083 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761088 4745 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761093 4745 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761099 4745 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761103 4745 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761108 4745 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761112 4745 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761117 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761121 4745 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761127 4745 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761132 4745 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761137 4745 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761143 4745 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761151 4745 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761158 4745 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761165 4745 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761170 4745 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761175 4745 feature_gate.go:330] unrecognized feature gate: Example
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761182 4745 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761187 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761191 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761196 4745 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761200 4745 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761204 4745 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761209 4745 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761213 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761218 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761222 4745 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761226 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761233 4745 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761238 4745 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761243 4745 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761247 4745 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761252 4745 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761256 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761263 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761273 4745 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761283 4745 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761294 4745 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761304 4745 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761314 4745 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761325 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761335 4745 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761345 4745 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761354 4745 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761368 4745 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761381 4745 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761392 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761406 4745 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761418 4745 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761430 4745 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761441 4745 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761451 4745 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761463 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761472 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.761483 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.761519 4745 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.770181 4745 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.770214 4745 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770303 4745 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770314 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770320 4745 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770327 4745 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770336 4745 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770342 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770348 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770353 4745 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770359 4745 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770364 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770370 4745 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770375 4745 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770380 4745 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770385 4745 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770391 4745 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770396 4745 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770401 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770408 4745 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770414 4745 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770419 4745 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770425 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770432 4745 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770439 4745 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770445 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770451 4745 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770458 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770463 4745 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770469 4745 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770474 4745 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770480 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770486 4745 feature_gate.go:330] unrecognized feature gate: Example Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770494 4745 feature_gate.go:330] unrecognized feature gate: PinnedImages 
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770500 4745 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770507 4745 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770517 4745 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770528 4745 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770554 4745 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770561 4745 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770567 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770573 4745 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770580 4745 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770586 4745 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770593 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770600 4745 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770607 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770613 4745 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 
10:36:55.770620 4745 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770626 4745 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770633 4745 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770638 4745 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770643 4745 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770649 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770657 4745 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770663 4745 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770668 4745 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770674 4745 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770679 4745 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770684 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770689 4745 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770694 4745 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770699 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 
10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770704 4745 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770710 4745 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770715 4745 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770721 4745 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770726 4745 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770731 4745 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770736 4745 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770741 4745 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770747 4745 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770753 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.770762 4745 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770920 4745 
feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770931 4745 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770939 4745 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770948 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770954 4745 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770961 4745 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770968 4745 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770974 4745 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770980 4745 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770986 4745 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770992 4745 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.770997 4745 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771003 4745 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771008 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 
10:36:55.771013 4745 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771019 4745 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771024 4745 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771029 4745 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771037 4745 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771046 4745 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771053 4745 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771060 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771067 4745 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771073 4745 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771081 4745 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771088 4745 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771094 4745 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771100 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 21 10:36:55 crc kubenswrapper[4745]: 
W0121 10:36:55.771108 4745 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771113 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771118 4745 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771124 4745 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771129 4745 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771134 4745 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771140 4745 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771146 4745 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771152 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771158 4745 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771163 4745 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771168 4745 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771174 4745 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771179 4745 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771184 4745 feature_gate.go:330] 
unrecognized feature gate: MachineAPIProviderOpenStack Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771189 4745 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771194 4745 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771200 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771205 4745 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771211 4745 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771216 4745 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771222 4745 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771227 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771232 4745 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771237 4745 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771242 4745 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771247 4745 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771252 4745 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771258 4745 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 21 10:36:55 
crc kubenswrapper[4745]: W0121 10:36:55.771264 4745 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771269 4745 feature_gate.go:330] unrecognized feature gate: Example Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771277 4745 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771282 4745 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771288 4745 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771294 4745 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771300 4745 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771305 4745 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771311 4745 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771316 4745 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771321 4745 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771326 4745 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771331 4745 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.771337 4745 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.771345 4745 feature_gate.go:386] feature 
gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.771843 4745 server.go:940] "Client rotation is on, will bootstrap in background" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.774800 4745 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.774904 4745 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.775571 4745 server.go:997] "Starting client certificate rotation" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.775605 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.776117 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-05 00:45:01.446439217 +0000 UTC Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.776223 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 10:36:55 crc kubenswrapper[4745]: E0121 10:36:55.784919 4745 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.78:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.792911 4745 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.807167 4745 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.814822 4745 log.go:25] "Validated CRI v1 runtime API" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.836954 4745 log.go:25] "Validated CRI v1 image API" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.841347 4745 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.843824 4745 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-21-10-31-04-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.843863 4745 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.860904 4745 manager.go:217] Machine: {Timestamp:2026-01-21 10:36:55.859359948 +0000 UTC m=+0.320147596 CPUVendorID:AuthenticAMD NumCores:8 
NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:fa2b5303-0f9c-4975-b62d-81213d42789a BootID:8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:6f:3e:40 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:6f:3e:40 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b4:c5:07 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cb:2a:b4 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:aa:ca:6c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a4:03:4c Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ea:37:16:42:36:78 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:76:33:65:40:4e:a1 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] 
Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.861370 
4745 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.861648 4745 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.862408 4745 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.862851 4745 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.862987 4745 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"image
fs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.863351 4745 topology_manager.go:138] "Creating topology manager with none policy" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.863437 4745 container_manager_linux.go:303] "Creating device plugin manager" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.863753 4745 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.863880 4745 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.864140 4745 state_mem.go:36] "Initialized new in-memory state store" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.864712 4745 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.866564 4745 kubelet.go:418] "Attempting to sync node with API server" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.866700 4745 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.866804 4745 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.866895 4745 kubelet.go:324] "Adding apiserver pod source" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.866983 4745 
apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.869069 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.869133 4745 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 21 10:36:55 crc kubenswrapper[4745]: E0121 10:36:55.869190 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.78:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.869523 4745 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.869426 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:55 crc kubenswrapper[4745]: E0121 10:36:55.870757 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.78:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873116 4745 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873778 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873806 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873817 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873825 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873840 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873848 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873857 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873871 4745 plugins.go:603] 
"Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873881 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873891 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873904 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.873914 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.885688 4745 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.886579 4745 server.go:1280] "Started kubelet" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.887194 4745 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.888108 4745 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.888037 4745 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 21 10:36:55 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.888948 4745 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.890452 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.890652 4745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.896764 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 03:39:51.586600909 +0000 UTC Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.896914 4745 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.897073 4745 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.896937 4745 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 21 10:36:55 crc kubenswrapper[4745]: E0121 10:36:55.896940 4745 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.897772 4745 factory.go:55] Registering systemd factory Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.897804 4745 factory.go:221] Registration of the systemd container factory successfully Jan 21 10:36:55 crc kubenswrapper[4745]: W0121 10:36:55.897811 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:55 crc kubenswrapper[4745]: E0121 10:36:55.897883 4745 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.78:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:36:55 crc kubenswrapper[4745]: E0121 10:36:55.897967 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="200ms" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.898065 4745 factory.go:153] Registering CRI-O factory Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.898107 4745 factory.go:221] Registration of the crio container factory successfully Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.898197 4745 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.898227 4745 factory.go:103] Registering Raw factory Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.898260 4745 manager.go:1196] Started watching for new ooms in manager Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.899923 4745 manager.go:319] Starting recovery of all containers Jan 21 10:36:55 crc kubenswrapper[4745]: E0121 10:36:55.899939 4745 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.78:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188cb8af1c7d5265 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 10:36:55.886525029 +0000 UTC m=+0.347312627,LastTimestamp:2026-01-21 10:36:55.886525029 +0000 UTC m=+0.347312627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.907173 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.907453 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.907685 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.907756 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.907821 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.907890 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.907953 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908040 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908112 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908181 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908247 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908334 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908400 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908476 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908582 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908666 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908791 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908868 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.908957 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.909040 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.909120 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.909191 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.910211 4745 server.go:460] "Adding debug handlers to kubelet server" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.910998 4745 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual 
state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911073 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911103 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911120 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911140 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911169 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911187 4745 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911203 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911217 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911230 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911252 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911265 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911279 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911340 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911356 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911368 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911381 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911396 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911409 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911425 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911438 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911452 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911467 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911480 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911495 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911509 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911522 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911606 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911621 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911633 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911646 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" 
seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911668 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911682 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911694 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911708 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911723 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911735 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911747 4745 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911762 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911775 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911787 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911800 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911813 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911828 4745 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911841 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911854 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911866 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911878 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911891 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911905 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911918 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911931 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911944 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911955 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911966 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911977 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.911989 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912001 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912013 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912025 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912036 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912046 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912059 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912070 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912080 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912092 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912105 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912117 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" 
seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912130 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912142 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912163 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912174 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912185 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912197 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912212 4745 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912223 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912239 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912251 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912263 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912274 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912288 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912308 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912319 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912335 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912350 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912362 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912375 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912389 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912401 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912414 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912428 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912439 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912451 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912462 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912474 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912486 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912498 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912510 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912542 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912556 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912567 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912578 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912590 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912602 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912614 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912625 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912639 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912650 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912666 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912679 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912690 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912701 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912736 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912747 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912760 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912770 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912781 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912792 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912804 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912815 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912826 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912835 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912847 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912858 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912868 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912880 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912890 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912901 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912912 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912923 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912933 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912944 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912957 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912968 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912977 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.912990 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913004 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913018 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913033 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913047 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913060 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" 
seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913073 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913086 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913100 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913141 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913158 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913171 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913185 
4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913199 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913210 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913221 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913233 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.913244 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.921295 4745 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.921377 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.921419 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.921444 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.921466 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.921493 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.921513 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.921567 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.921589 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.921613 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922480 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922521 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922571 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922599 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922619 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922651 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922671 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922714 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922740 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" 
seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922792 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922817 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922854 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922895 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922965 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.922999 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.923037 4745 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.923057 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.923080 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.923106 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.923192 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.923215 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.923243 4745 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.923265 4745 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.923284 4745 reconstruct.go:97] "Volume reconstruction finished" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.923297 4745 reconciler.go:26] "Reconciler: start to sync state" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.929281 4745 manager.go:324] Recovery completed Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.938648 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.942387 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.942449 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.942459 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.943554 4745 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.943571 4745 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.943590 4745 state_mem.go:36] "Initialized new in-memory state store" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.995882 4745 policy_none.go:49] 
"None policy: Start" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.997297 4745 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.997374 4745 state_mem.go:35] "Initializing new in-memory state store" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.997290 4745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 21 10:36:55 crc kubenswrapper[4745]: E0121 10:36:55.997491 4745 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.998887 4745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.998934 4745 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 21 10:36:55 crc kubenswrapper[4745]: I0121 10:36:55.998964 4745 kubelet.go:2335] "Starting kubelet main sync loop" Jan 21 10:36:55 crc kubenswrapper[4745]: E0121 10:36:55.999017 4745 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 21 10:36:56 crc kubenswrapper[4745]: W0121 10:36:56.000512 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:56 crc kubenswrapper[4745]: E0121 10:36:56.000612 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.78:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 
10:36:56.050984 4745 manager.go:334] "Starting Device Plugin manager" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.051051 4745 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.051069 4745 server.go:79] "Starting device plugin registration server" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.051604 4745 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.051627 4745 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.051791 4745 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.051929 4745 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.051939 4745 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 21 10:36:56 crc kubenswrapper[4745]: E0121 10:36:56.061054 4745 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 10:36:56 crc kubenswrapper[4745]: E0121 10:36:56.099136 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="400ms" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.099191 4745 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.099290 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.100050 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.100088 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.100100 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.100233 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.100703 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.100780 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.100929 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.100954 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.100964 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.101058 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.101318 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.101393 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.101803 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.101825 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.101834 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.101914 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102230 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102265 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102310 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102286 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102343 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102727 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102746 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102755 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102763 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102781 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102794 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102897 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.102996 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.103018 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.103066 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.103097 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.103114 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.103893 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.103924 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.103935 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.104259 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.104278 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.104287 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.104426 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.104445 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.105149 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.105171 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.105182 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.153795 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.155564 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.155629 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.155644 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.155684 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:36:56 crc kubenswrapper[4745]: E0121 10:36:56.156495 4745 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.78:6443: connect: connection refused" node="crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.225792 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.225851 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.225886 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.225917 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.225970 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.226043 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.226084 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.226111 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.226142 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.226177 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.226206 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") 
pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.226233 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.226280 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.226325 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.226386 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328322 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328404 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328430 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328450 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328482 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328502 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328521 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328562 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328565 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328604 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328711 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328736 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 
10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328748 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328769 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328769 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328782 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328741 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328800 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328808 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328842 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328827 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328890 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328874 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 
10:36:56.328926 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328954 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.328987 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.329042 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.329483 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.329511 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.329591 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.356895 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.358727 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.358774 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.358787 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.358823 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:36:56 crc kubenswrapper[4745]: E0121 10:36:56.359379 4745 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.78:6443: connect: connection refused" node="crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.436594 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.456742 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: W0121 10:36:56.459913 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0e31869a46f98f61f0a51f4fa87e4a6537c38c577082353ecb58c23f2d7df0e2 WatchSource:0}: Error finding container 0e31869a46f98f61f0a51f4fa87e4a6537c38c577082353ecb58c23f2d7df0e2: Status 404 returned error can't find the container with id 0e31869a46f98f61f0a51f4fa87e4a6537c38c577082353ecb58c23f2d7df0e2 Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.467365 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: W0121 10:36:56.472907 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-f11e22de8bec29d09b77f7e9375247415325ce18db0414def0e6db3a52fbd9b4 WatchSource:0}: Error finding container f11e22de8bec29d09b77f7e9375247415325ce18db0414def0e6db3a52fbd9b4: Status 404 returned error can't find the container with id f11e22de8bec29d09b77f7e9375247415325ce18db0414def0e6db3a52fbd9b4 Jan 21 10:36:56 crc kubenswrapper[4745]: W0121 10:36:56.485218 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c5c3fa7bdbe26c5a5caf1b93f0f326480de1800c53b9bdcbb83e0183f40bb904 WatchSource:0}: Error finding container c5c3fa7bdbe26c5a5caf1b93f0f326480de1800c53b9bdcbb83e0183f40bb904: Status 404 returned error can't find the container with id c5c3fa7bdbe26c5a5caf1b93f0f326480de1800c53b9bdcbb83e0183f40bb904 Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.489986 4745 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.497946 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 21 10:36:56 crc kubenswrapper[4745]: E0121 10:36:56.500911 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="800ms" Jan 21 10:36:56 crc kubenswrapper[4745]: W0121 10:36:56.516862 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b2a16b71f55dc8ceddd488c770da8c77866bc65aa19ee94d8567c4560edef7b2 WatchSource:0}: Error finding container b2a16b71f55dc8ceddd488c770da8c77866bc65aa19ee94d8567c4560edef7b2: Status 404 returned error can't find the container with id b2a16b71f55dc8ceddd488c770da8c77866bc65aa19ee94d8567c4560edef7b2 Jan 21 10:36:56 crc kubenswrapper[4745]: W0121 10:36:56.519679 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-bb870d4ee7b4e413f8b8308491c2a8621f35e343199c9f77090a0912225c7162 WatchSource:0}: Error finding container bb870d4ee7b4e413f8b8308491c2a8621f35e343199c9f77090a0912225c7162: Status 404 returned error can't find the container with id bb870d4ee7b4e413f8b8308491c2a8621f35e343199c9f77090a0912225c7162 Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.759784 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.761545 4745 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.761590 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.761600 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.761628 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:36:56 crc kubenswrapper[4745]: E0121 10:36:56.762141 4745 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.78:6443: connect: connection refused" node="crc" Jan 21 10:36:56 crc kubenswrapper[4745]: W0121 10:36:56.831429 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:56 crc kubenswrapper[4745]: E0121 10:36:56.832030 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.78:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.888060 4745 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:56 crc kubenswrapper[4745]: I0121 10:36:56.897442 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2026-01-11 16:13:40.062191258 +0000 UTC Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.004158 4745 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627" exitCode=0 Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.004287 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627"} Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.004568 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bb870d4ee7b4e413f8b8308491c2a8621f35e343199c9f77090a0912225c7162"} Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.004721 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.011388 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.011438 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.011447 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.013414 4745 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879" exitCode=0 Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.013521 4745 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879"} Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.013627 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b2a16b71f55dc8ceddd488c770da8c77866bc65aa19ee94d8567c4560edef7b2"} Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.013759 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.014885 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.014924 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.014936 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.016066 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a"} Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.016108 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c5c3fa7bdbe26c5a5caf1b93f0f326480de1800c53b9bdcbb83e0183f40bb904"} Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.018774 4745 generic.go:334] "Generic (PLEG): 
container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5" exitCode=0 Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.018841 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5"} Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.018868 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f11e22de8bec29d09b77f7e9375247415325ce18db0414def0e6db3a52fbd9b4"} Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.018951 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.019542 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.019572 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.019583 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.020586 4745 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="89d498d4394db88bfe3773e6e728c2fe811002add82c940eb49aba447874ad72" exitCode=0 Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.020616 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"89d498d4394db88bfe3773e6e728c2fe811002add82c940eb49aba447874ad72"} Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.020763 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.020855 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0e31869a46f98f61f0a51f4fa87e4a6537c38c577082353ecb58c23f2d7df0e2"} Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.020995 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.021429 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.021465 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.021475 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.022102 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.022119 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.022128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:57 crc kubenswrapper[4745]: W0121 10:36:57.105167 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:57 crc kubenswrapper[4745]: E0121 10:36:57.105266 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.78:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:36:57 crc kubenswrapper[4745]: E0121 10:36:57.302022 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="1.6s" Jan 21 10:36:57 crc kubenswrapper[4745]: W0121 10:36:57.374833 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:57 crc kubenswrapper[4745]: E0121 10:36:57.374986 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.78:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:36:57 crc kubenswrapper[4745]: W0121 10:36:57.477330 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:57 crc 
kubenswrapper[4745]: E0121 10:36:57.478168 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.78:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.563000 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.568149 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.568201 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.568215 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.568250 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:36:57 crc kubenswrapper[4745]: E0121 10:36:57.568986 4745 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.78:6443: connect: connection refused" node="crc" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.813025 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 10:36:57 crc kubenswrapper[4745]: E0121 10:36:57.814343 4745 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.78:6443: connect: connection refused" logger="UnhandledError" Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.888420 4745 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:57 crc kubenswrapper[4745]: I0121 10:36:57.897730 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 10:45:18.797171514 +0000 UTC Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.027649 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237"} Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.027751 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09"} Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.030318 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646"} Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.033013 4745 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d520e90165f213871aed2c9e150d4db483542588f87e1b4963888207bad740c5" exitCode=0 Jan 21 10:36:58 crc kubenswrapper[4745]: 
I0121 10:36:58.033103 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d520e90165f213871aed2c9e150d4db483542588f87e1b4963888207bad740c5"} Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.033190 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.034542 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.034576 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.034589 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.035586 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075"} Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.038066 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ab4c6b8f018a3b9a6cf312b8b3a2d14644736b45232de4dcd26408665ed9da1a"} Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.038156 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.038878 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.038918 4745 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.038935 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.888974 4745 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.78:6443: connect: connection refused Jan 21 10:36:58 crc kubenswrapper[4745]: I0121 10:36:58.898020 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 11:50:42.144138675 +0000 UTC Jan 21 10:36:58 crc kubenswrapper[4745]: E0121 10:36:58.902749 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="3.2s" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.043741 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d"} Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.043801 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3c882c057192253efb3f2945553b94bd8b18b761f5978e52d5379e041608a6b7"} Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.043861 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:59 crc 
kubenswrapper[4745]: I0121 10:36:59.044837 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.044876 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.044890 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.047623 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98"} Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.047760 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.048509 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.048545 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.048555 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.051772 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e"} Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.051822 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff"} Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.053734 4745 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bc8a6a3f456efbf6a0510f473010bbc22268b8976ad06d1c285ab55da3d7264e" exitCode=0 Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.053775 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bc8a6a3f456efbf6a0510f473010bbc22268b8976ad06d1c285ab55da3d7264e"} Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.053928 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.054800 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.054831 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.054845 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.171291 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.172911 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.172972 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.172984 4745 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.173021 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.895453 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:36:59 crc kubenswrapper[4745]: I0121 10:36:59.898665 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 07:09:16.269898088 +0000 UTC Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.065207 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"389334a0748bf33f1149793ac6e39479a6ab7ab05e98ebda8ca55df9545a26f4"} Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.065298 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ef1a18d34b3efae9c9beef600970ceaff0df85af86ef545933b072e2eca390a9"} Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.065316 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7693d0bf7e1e5a868800e4259c283c4d5a62e03adeb784bd81370d7bfb23b483"} Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.065327 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6080632962ccd5c428d2d3e32e21e1d409b2069554cb7f8a238aa665b6ddcd74"} Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.069469 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e"} Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.069763 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885"} Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.069597 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.069627 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.069548 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.070198 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.071584 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.071614 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.071624 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.072459 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.072478 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:00 crc 
kubenswrapper[4745]: I0121 10:37:00.072487 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.072674 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.072696 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.072708 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.088697 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.192277 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.200162 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.206299 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:37:00 crc kubenswrapper[4745]: I0121 10:37:00.899632 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:16:53.118506911 +0000 UTC Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.077147 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c996eced8176073c12deb6a60aa19e78fd1ecf8fe4e94ea95e1c428304eec596"} Jan 21 
10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.077235 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.077283 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.077244 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.077392 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.078704 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.078761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.078777 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.078958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.078998 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.079012 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.079777 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.079835 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.079852 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.475715 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.871686 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 21 10:37:01 crc kubenswrapper[4745]: I0121 10:37:01.900789 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 11:57:38.072333303 +0000 UTC Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.080422 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.080494 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.080510 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.082055 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.082127 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.082158 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.082310 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:02 crc 
kubenswrapper[4745]: I0121 10:37:02.082351 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.082366 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.082368 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.082405 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.082423 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.170382 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 21 10:37:02 crc kubenswrapper[4745]: I0121 10:37:02.901602 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:48:36.425681134 +0000 UTC Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.083212 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.083212 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.084754 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.084991 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.085004 4745 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.085083 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.085128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.085146 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.346714 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.346997 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.348722 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.348767 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.348784 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:03 crc kubenswrapper[4745]: I0121 10:37:03.902096 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 16:25:22.260417777 +0000 UTC Jan 21 10:37:04 crc kubenswrapper[4745]: I0121 10:37:04.604988 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:37:04 crc kubenswrapper[4745]: I0121 
10:37:04.605162 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:04 crc kubenswrapper[4745]: I0121 10:37:04.606358 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:04 crc kubenswrapper[4745]: I0121 10:37:04.606387 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:04 crc kubenswrapper[4745]: I0121 10:37:04.606396 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:04 crc kubenswrapper[4745]: I0121 10:37:04.902847 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 23:14:23.219164819 +0000 UTC Jan 21 10:37:05 crc kubenswrapper[4745]: I0121 10:37:05.903450 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:01:27.172885265 +0000 UTC Jan 21 10:37:06 crc kubenswrapper[4745]: E0121 10:37:06.061158 4745 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 21 10:37:06 crc kubenswrapper[4745]: I0121 10:37:06.903880 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 21:01:20.314232811 +0000 UTC Jan 21 10:37:07 crc kubenswrapper[4745]: I0121 10:37:07.904339 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:23:27.740029746 +0000 UTC Jan 21 10:37:08 crc kubenswrapper[4745]: I0121 10:37:08.732489 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:37:08 crc kubenswrapper[4745]: I0121 10:37:08.732726 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:08 crc kubenswrapper[4745]: I0121 10:37:08.734343 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:08 crc kubenswrapper[4745]: I0121 10:37:08.734381 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:08 crc kubenswrapper[4745]: I0121 10:37:08.734394 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:08 crc kubenswrapper[4745]: I0121 10:37:08.749296 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:37:08 crc kubenswrapper[4745]: I0121 10:37:08.908902 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 08:49:38.198908718 +0000 UTC Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.109423 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.110378 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.110502 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.110594 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:09 crc kubenswrapper[4745]: E0121 10:37:09.174313 4745 kubelet_node_status.go:99] "Unable to register node 
with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.313511 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.313729 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.314876 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.314933 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.314952 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.365200 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 21 10:37:09 crc kubenswrapper[4745]: W0121 10:37:09.625082 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.625266 4745 trace.go:236] Trace[1658901223]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:36:59.623) (total time: 10001ms): Jan 21 10:37:09 crc kubenswrapper[4745]: Trace[1658901223]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:37:09.625) Jan 21 10:37:09 crc kubenswrapper[4745]: Trace[1658901223]: [10.00156097s] [10.00156097s] END 
Jan 21 10:37:09 crc kubenswrapper[4745]: E0121 10:37:09.625302 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 10:37:09 crc kubenswrapper[4745]: W0121 10:37:09.848594 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.848690 4745 trace.go:236] Trace[1686759566]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:36:59.847) (total time: 10001ms): Jan 21 10:37:09 crc kubenswrapper[4745]: Trace[1686759566]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:37:09.848) Jan 21 10:37:09 crc kubenswrapper[4745]: Trace[1686759566]: [10.001353393s] [10.001353393s] END Jan 21 10:37:09 crc kubenswrapper[4745]: E0121 10:37:09.848712 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.888802 4745 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 21 10:37:09 crc kubenswrapper[4745]: I0121 10:37:09.909798 4745 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 14:31:43.136375405 +0000 UTC Jan 21 10:37:10 crc kubenswrapper[4745]: W0121 10:37:10.057890 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 10:37:10.058030 4745 trace.go:236] Trace[265672958]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:37:00.055) (total time: 10002ms): Jan 21 10:37:10 crc kubenswrapper[4745]: Trace[265672958]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:37:10.057) Jan 21 10:37:10 crc kubenswrapper[4745]: Trace[265672958]: [10.002014018s] [10.002014018s] END Jan 21 10:37:10 crc kubenswrapper[4745]: E0121 10:37:10.058105 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 10:37:10.112363 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 10:37:10.113351 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 10:37:10.113384 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 
10:37:10.113393 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:10 crc kubenswrapper[4745]: W0121 10:37:10.142852 4745 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 10:37:10.144252 4745 trace.go:236] Trace[1861926235]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (21-Jan-2026 10:37:00.141) (total time: 10003ms): Jan 21 10:37:10 crc kubenswrapper[4745]: Trace[1861926235]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (10:37:10.142) Jan 21 10:37:10 crc kubenswrapper[4745]: Trace[1861926235]: [10.003124461s] [10.003124461s] END Jan 21 10:37:10 crc kubenswrapper[4745]: E0121 10:37:10.144300 4745 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 10:37:10.154412 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 10:37:10.193386 4745 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 10:37:10.193481 4745 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 10:37:10.579694 4745 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 10:37:10.579818 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 21 10:37:10 crc kubenswrapper[4745]: I0121 10:37:10.910036 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 20:40:45.60009467 +0000 UTC Jan 21 10:37:11 crc kubenswrapper[4745]: I0121 10:37:11.129015 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:11 crc kubenswrapper[4745]: I0121 10:37:11.130186 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:11 crc kubenswrapper[4745]: I0121 10:37:11.130234 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:11 crc kubenswrapper[4745]: I0121 10:37:11.130244 4745 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:11 crc kubenswrapper[4745]: I0121 10:37:11.733563 4745 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 10:37:11 crc kubenswrapper[4745]: I0121 10:37:11.733666 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 10:37:11 crc kubenswrapper[4745]: I0121 10:37:11.911841 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:32:38.523577187 +0000 UTC Jan 21 10:37:12 crc kubenswrapper[4745]: I0121 10:37:12.375025 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:12 crc kubenswrapper[4745]: I0121 10:37:12.377254 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:12 crc kubenswrapper[4745]: I0121 10:37:12.377313 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:12 crc kubenswrapper[4745]: I0121 10:37:12.377330 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:12 crc kubenswrapper[4745]: I0121 10:37:12.377369 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:37:12 crc kubenswrapper[4745]: E0121 10:37:12.426063 4745 
kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 21 10:37:12 crc kubenswrapper[4745]: I0121 10:37:12.923354 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 14:59:46.077708042 +0000 UTC Jan 21 10:37:12 crc kubenswrapper[4745]: I0121 10:37:12.925783 4745 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 10:37:13 crc kubenswrapper[4745]: I0121 10:37:13.923696 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:44:05.536262026 +0000 UTC Jan 21 10:37:14 crc kubenswrapper[4745]: I0121 10:37:14.913945 4745 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 10:37:14 crc kubenswrapper[4745]: I0121 10:37:14.924481 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 16:04:34.238132821 +0000 UTC Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.163779 4745 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.198974 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.204126 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:37:15 crc kubenswrapper[4745]: E0121 10:37:15.586873 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.590430 4745 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.606248 4745 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.631483 4745 csr.go:261] certificate signing request csr-jm8w2 is approved, waiting to be issued Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.641418 4745 csr.go:257] certificate signing request csr-jm8w2 is issued Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.676204 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.776204 4745 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 21 10:37:15 crc kubenswrapper[4745]: W0121 10:37:15.777141 4745 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:37:15 crc kubenswrapper[4745]: E0121 10:37:15.777253 4745 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events/crc.188cb8af1fd24f1c\": read tcp 38.129.56.78:45958->38.129.56.78:6443: use of closed network connection" event="&Event{ObjectMeta:{crc.188cb8af1fd24f1c default 26190 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 10:36:55 +0000 UTC,LastTimestamp:2026-01-21 10:36:56.103081674 +0000 UTC m=+0.563869272,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 10:37:15 crc kubenswrapper[4745]: W0121 10:37:15.777266 4745 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.913608 4745 apiserver.go:52] "Watching apiserver" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.925463 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 14:59:28.448062523 +0000 UTC Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.933637 4745 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.933938 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.934435 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.934723 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:15 crc kubenswrapper[4745]: E0121 10:37:15.934854 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.934791 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:15 crc kubenswrapper[4745]: E0121 10:37:15.935028 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.935085 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.935675 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:15 crc kubenswrapper[4745]: E0121 10:37:15.935825 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.935728 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.946802 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.947496 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.947624 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.948157 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.948165 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.948935 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.949059 4745 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.949096 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 10:37:15 crc kubenswrapper[4745]: I0121 10:37:15.949334 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:15.998603 4745 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001001 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001042 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001074 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001098 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001124 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001150 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001175 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001199 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001227 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001256 4745 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001277 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001299 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001321 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001342 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001365 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: 
\"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001390 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001412 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001443 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001467 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001489 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001510 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001537 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001577 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001604 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001652 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001678 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001703 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001729 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001753 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001778 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001820 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001843 4745 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001867 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001891 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001916 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001940 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001963 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.001985 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002018 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002041 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002063 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002086 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002093 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002126 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002240 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002272 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002299 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002325 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002344 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002361 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002378 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002397 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002413 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002438 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002471 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002495 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002532 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002577 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002602 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 
10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002628 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002653 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002879 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002908 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002936 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002963 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.002991 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003019 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003048 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003071 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003096 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 
10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003130 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003157 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003185 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003208 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003230 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003255 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003278 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003301 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003326 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003352 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003377 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: 
\"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003404 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003427 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003453 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003475 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003499 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003523 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003568 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003597 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003621 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003645 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003669 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " 
Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003696 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003719 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003743 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003769 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003795 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003818 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod 
\"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003843 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003871 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003896 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003942 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003968 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.003992 4745 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004019 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004036 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004052 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004071 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004087 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004104 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004128 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004150 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004175 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004198 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004223 4745 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004248 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004275 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004302 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004327 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004350 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004375 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004398 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004418 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004454 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004471 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 21 10:37:16 crc kubenswrapper[4745]: 
I0121 10:37:16.004489 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004517 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004620 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004644 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004665 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004687 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004710 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004735 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004759 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004780 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004837 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004866 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004889 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004917 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004955 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004978 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.004999 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005207 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005245 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005265 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005282 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005298 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005318 4745 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005342 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005361 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005380 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005397 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005418 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005436 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005453 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005472 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005489 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005507 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005526 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005561 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005577 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005594 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005611 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005628 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 21 
10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005647 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005676 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005696 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005714 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005734 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005754 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005772 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005794 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005811 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005828 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005845 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005863 4745 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005880 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005898 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005915 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005931 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005949 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005966 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.005984 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006001 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006020 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006039 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 
21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006058 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006077 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006095 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006115 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006134 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006152 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006170 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006187 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006207 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006228 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006300 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " 
pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006342 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006370 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006392 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006416 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006445 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: 
\"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006471 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006496 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006516 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006536 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006573 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006591 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006611 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006633 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.006705 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.007210 4745 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.007532 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.008077 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.009107 4745 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.009164 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.009538 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.009739 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.011645 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.011926 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.012206 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.013175 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.012999 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.014380 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.014952 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.016802 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.017139 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.017379 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.017617 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.018525 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.018952 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.020281 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.022150 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.022297 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.030098 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.037957 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.038755 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.038835 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.039040 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.039190 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.039385 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.039245 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.039293 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.039590 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.039774 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.040116 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.040199 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.040255 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.040612 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.040871 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.040956 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.041191 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.041571 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.041840 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.042020 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.042352 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.041582 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.042768 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.043786 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.043936 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.044226 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.044345 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.044688 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.044882 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.045114 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.045410 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.045481 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.045495 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.045665 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.051831 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.052152 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.052330 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.052791 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.052978 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.053160 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.053381 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.057005 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.057889 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.057909 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.058153 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.058409 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.061709 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.062107 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.062827 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.062830 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.063008 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.063374 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.063505 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.063935 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.063983 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.064243 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.064299 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.064358 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.064601 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.064645 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.064826 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.065132 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.064644 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.065699 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.066088 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.066156 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.066268 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.066481 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:16.56638336 +0000 UTC m=+21.027170958 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.072722 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:37:16.572679183 +0000 UTC m=+21.033466921 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.064006 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.072916 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.066640 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.066954 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.067257 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.067452 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.067558 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.067851 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.068111 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.068129 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.068398 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.068511 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.068696 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.068906 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.068948 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.069060 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.069471 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.069732 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.070038 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.070246 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.023281 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.070451 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.070633 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.070709 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.071377 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.071915 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.075396 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.075706 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.076099 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.076154 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.076574 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.076938 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.077146 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.077724 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.078705 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.080161 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.080314 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.080708 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.080803 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.081052 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.081207 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.081517 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.081584 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.082711 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.082982 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.083114 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.083507 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.083573 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.083694 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.106802 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.107011 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.107037 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.107258 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.109462 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.109945 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.109969 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.109982 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.109992 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on 
node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110028 4745 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110039 4745 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110049 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110060 4745 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110072 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110101 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110111 4745 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: 
I0121 10:37:16.110120 4745 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110129 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110139 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110171 4745 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110182 4745 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110191 4745 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110203 4745 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110214 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110224 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110250 4745 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110263 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110273 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110282 4745 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110294 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110303 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110331 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110343 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110351 4745 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110360 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110370 4745 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110379 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110405 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath 
\"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110415 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110425 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110435 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110447 4745 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110459 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110486 4745 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110498 4745 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110507 4745 reconciler_common.go:293] "Volume detached for 
volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110515 4745 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110544 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110556 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110583 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110593 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110604 4745 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110633 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110642 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110651 4745 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110661 4745 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110672 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110682 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110710 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110719 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 21 
10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110728 4745 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110737 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110746 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110757 4745 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110786 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110796 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110805 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110814 4745 
reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110824 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110833 4745 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110859 4745 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110870 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110880 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110888 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110898 4745 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110908 4745 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110917 4745 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110945 4745 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110954 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110962 4745 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110972 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110981 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") 
on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.110997 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111024 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111034 4745 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111045 4745 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111055 4745 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111065 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111075 4745 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111101 4745 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111111 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111119 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111128 4745 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111137 4745 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111145 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111154 4745 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111181 4745 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111190 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111199 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111208 4745 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111222 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111231 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111259 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111271 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111281 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111289 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111298 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111308 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111338 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111346 4745 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111355 4745 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111363 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111372 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111381 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111390 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111416 4745 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111425 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111434 4745 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath 
\"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111443 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111451 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111461 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111492 4745 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111504 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111513 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111523 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 
10:37:16.111568 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111576 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111585 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111593 4745 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111601 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111610 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111638 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111649 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111659 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111667 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111676 4745 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111687 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111715 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111727 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111735 4745 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111745 4745 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111754 4745 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111763 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.111773 4745 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.113554 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.113914 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). 
InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.114189 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.114770 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.120642 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.121312 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.123324 4745 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.123466 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.123512 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.123534 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.123639 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:16.623611417 +0000 UTC m=+21.084399015 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.125085 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.125169 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:16.625141559 +0000 UTC m=+21.085929147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.126031 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.127940 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.132502 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.132601 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.133250 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.134571 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.135602 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.135933 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.136104 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.136552 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.137652 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.138793 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.140342 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.140632 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.140861 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.141177 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.141503 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.141695 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.141772 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.142290 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.142456 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.145106 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.145539 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.146012 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.147053 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.147385 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.148224 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.148474 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.148725 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.149012 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.149021 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.149579 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.149676 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.149847 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.149959 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.150175 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.151099 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.151978 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.152262 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.154162 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.155159 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.155458 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.155461 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.156135 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.156757 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.157218 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.158094 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.159611 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.164497 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.165626 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.165842 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.167202 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.167540 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.167599 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.167656 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.167817 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:16.667750223 +0000 UTC m=+21.128537821 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.180234 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.180607 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.184733 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.190148 4745 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.193297 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.200661 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.201176 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.202486 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.208175 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213337 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213753 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213804 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213860 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213873 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213883 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213892 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213902 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213911 4745 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213923 4745 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213934 4745 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213945 4745 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213954 4745 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213962 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213972 4745 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213983 4745 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.213993 4745 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214002 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214011 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214020 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214029 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214039 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214047 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214056 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc 
kubenswrapper[4745]: I0121 10:37:16.214065 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214074 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214082 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214091 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214101 4745 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214110 4745 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214119 4745 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214129 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: 
\"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214140 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214149 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214159 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214168 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214178 4745 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214188 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214197 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on 
node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214214 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214222 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214231 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214240 4745 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214249 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214449 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214458 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214466 4745 
reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214475 4745 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214484 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214495 4745 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214505 4745 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214483 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214581 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214514 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214659 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214675 4745 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.214820 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.218265 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: 
"49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.218338 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.242298 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.263922 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.271015 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.294252 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.324723 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.325770 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.325820 4745 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.628220 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.628731 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.628770 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.628792 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.628933 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.629005 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:17.628984825 +0000 UTC m=+22.089772423 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.629076 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:37:17.629070027 +0000 UTC m=+22.089857625 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.629113 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.629137 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:17.629130219 +0000 UTC m=+22.089917827 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.629203 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.629214 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.629227 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.629247 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:17.629241962 +0000 UTC m=+22.090029560 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.651932 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-21 10:32:15 +0000 UTC, rotation deadline is 2026-11-09 11:58:04.087869505 +0000 UTC Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.652032 4745 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7009h20m47.435848266s for next certificate rotation Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.680814 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.729573 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.729840 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.729872 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 
10:37:16.729889 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:16 crc kubenswrapper[4745]: E0121 10:37:16.729969 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:17.729946757 +0000 UTC m=+22.190734355 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.731325 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.762553 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.790200 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.806237 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.825755 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.860182 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.916592 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.926720 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 13:55:00.263223817 +0000 UTC Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.939314 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.951500 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:16 crc kubenswrapper[4745]: I0121 10:37:16.973323 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.172762 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"24f1c1c6f95ae49e847cdefd7d5207f820e0883988855653f41672e96394be39"} Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.174514 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687"} Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.174828 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d9f8ac444b8105027b577522620a7706475ac4c981b42572153cb6daf50bd344"} Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.176106 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5"} Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.176165 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2"} Jan 21 
10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.176186 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"68fa2d958b361b8f10cbd928cdcefae366ae5187014d0511965b5b4b68310d76"} Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.222960 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.304110 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.346761 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-7855h"] Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.347220 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7855h" Jan 21 10:37:17 crc kubenswrapper[4745]: W0121 10:37:17.362902 4745 reflector.go:561] object-"openshift-dns"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 21 10:37:17 crc kubenswrapper[4745]: W0121 10:37:17.362943 4745 reflector.go:561] object-"openshift-dns"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.362986 4745 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:37:17 crc kubenswrapper[4745]: W0121 10:37:17.362908 4745 reflector.go:561] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": failed to list *v1.Secret: secrets "node-resolver-dockercfg-kz9s7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-dns": no relationship found between node 'crc' and this object Jan 21 10:37:17 crc 
kubenswrapper[4745]: E0121 10:37:17.363033 4745 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"node-resolver-dockercfg-kz9s7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-resolver-dockercfg-kz9s7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.363051 4745 reflector.go:158] "Unhandled Error" err="object-\"openshift-dns\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-dns\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.366224 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.446191 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bf705b0-6d21-4c31-ab5f-7439aa4607af-hosts-file\") pod \"node-resolver-7855h\" (UID: \"8bf705b0-6d21-4c31-ab5f-7439aa4607af\") " pod="openshift-dns/node-resolver-7855h" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.446248 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hd6g\" (UniqueName: 
\"kubernetes.io/projected/8bf705b0-6d21-4c31-ab5f-7439aa4607af-kube-api-access-2hd6g\") pod \"node-resolver-7855h\" (UID: \"8bf705b0-6d21-4c31-ab5f-7439aa4607af\") " pod="openshift-dns/node-resolver-7855h" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.547505 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bf705b0-6d21-4c31-ab5f-7439aa4607af-hosts-file\") pod \"node-resolver-7855h\" (UID: \"8bf705b0-6d21-4c31-ab5f-7439aa4607af\") " pod="openshift-dns/node-resolver-7855h" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.547591 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hd6g\" (UniqueName: \"kubernetes.io/projected/8bf705b0-6d21-4c31-ab5f-7439aa4607af-kube-api-access-2hd6g\") pod \"node-resolver-7855h\" (UID: \"8bf705b0-6d21-4c31-ab5f-7439aa4607af\") " pod="openshift-dns/node-resolver-7855h" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.548069 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8bf705b0-6d21-4c31-ab5f-7439aa4607af-hosts-file\") pod \"node-resolver-7855h\" (UID: \"8bf705b0-6d21-4c31-ab5f-7439aa4607af\") " pod="openshift-dns/node-resolver-7855h" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.648676 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.648799 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod 
\"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.648824 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.648856 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.648951 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.649021 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:19.649001925 +0000 UTC m=+24.109789523 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.649080 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:37:19.649073267 +0000 UTC m=+24.109860865 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.649195 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.649210 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.649222 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.649245 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:19.649238911 +0000 UTC m=+24.110026509 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.649302 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.649326 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:19.649320714 +0000 UTC m=+24.110108312 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.749819 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.750051 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.750073 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.750089 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:17 crc kubenswrapper[4745]: E0121 10:37:17.750158 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:37:19.750135462 +0000 UTC m=+24.210923060 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.915649 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 21 10:37:17 crc kubenswrapper[4745]: I0121 10:37:17.928754 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 02:47:00.123312436 +0000 UTC Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.099881 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:18 crc kubenswrapper[4745]: E0121 10:37:18.100011 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.100068 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:18 crc kubenswrapper[4745]: E0121 10:37:18.100107 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.100147 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:18 crc kubenswrapper[4745]: E0121 10:37:18.100188 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.103057 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.104174 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.105633 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.106402 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.107590 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.108131 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.108893 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.110062 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.110793 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.111785 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.112261 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.114878 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.115458 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.115935 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.116511 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.117069 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.117808 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.118276 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.118853 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.119423 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.120498 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.121041 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.121825 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.122314 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.123031 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.124000 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.124471 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.125051 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.125533 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.126054 4745 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath 
from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.126180 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.130083 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.130734 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.131186 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.142418 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.143440 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.144579 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.149256 4745 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.150838 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.151375 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.152799 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.153472 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.154516 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.155009 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.156038 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.302454 4745 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.302967 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\
"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reaso
n\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.308400 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.678157 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.692067 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hd6g\" (UniqueName: \"kubernetes.io/projected/8bf705b0-6d21-4c31-ab5f-7439aa4607af-kube-api-access-2hd6g\") pod \"node-resolver-7855h\" (UID: \"8bf705b0-6d21-4c31-ab5f-7439aa4607af\") " pod="openshift-dns/node-resolver-7855h" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.724869 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.738604 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.763405 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.818733 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.826466 4745 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.837442 4745 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.837505 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.837518 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.837648 4745 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.870466 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-7855h" Jan 21 10:37:18 crc kubenswrapper[4745]: W0121 10:37:18.925102 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bf705b0_6d21_4c31_ab5f_7439aa4607af.slice/crio-03af9fd90ae34f99242d644b89dae437f0dfeaf05f2df77a63996621d08aea08 WatchSource:0}: Error finding container 03af9fd90ae34f99242d644b89dae437f0dfeaf05f2df77a63996621d08aea08: Status 404 returned error can't find the container with id 03af9fd90ae34f99242d644b89dae437f0dfeaf05f2df77a63996621d08aea08 Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.971882 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 14:04:43.685253268 +0000 UTC Jan 21 10:37:18 crc kubenswrapper[4745]: I0121 10:37:18.972009 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.014409 4745 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.014849 4745 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.027960 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.028017 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.028028 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.028057 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.028072 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.041425 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.074860 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.075325 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.093167 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.093209 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.093222 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.093244 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 
10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.093259 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.097038 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.106513 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.117776 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.133415 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.133468 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.133483 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.133514 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.133532 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.166150 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.168313 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.182374 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.182425 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.182439 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.182458 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.182472 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.184251 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7855h" event={"ID":"8bf705b0-6d21-4c31-ab5f-7439aa4607af","Type":"ContainerStarted","Data":"03af9fd90ae34f99242d644b89dae437f0dfeaf05f2df77a63996621d08aea08"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.195314 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.200104 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.213244 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.213278 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.213287 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.213306 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 
10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.213318 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.220881 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.282625 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.290589 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c53
7fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\
\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b
7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi
-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":
\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.290804 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.292905 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.292936 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.292945 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.292977 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.292988 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file 
in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.382746 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-p8q45"] Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.383229 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.386882 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-b8tqm"] Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.392053 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.392209 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.392493 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.392683 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.392821 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.393000 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.395615 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l7mcj"] Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.395766 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.395786 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.395798 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.395819 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.395832 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.408422 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-pnnzc"] Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.409130 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.409502 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.410299 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.410430 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.410516 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.410556 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.411249 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.415680 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.415779 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.415863 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.416074 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.416208 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 
10:37:19.416352 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.416506 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.416651 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.416825 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.418453 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b
491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"stat
e\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.456245 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475100 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-config\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475141 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovn-node-metrics-cert\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475158 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-var-lib-openvswitch\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475191 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8abb3db-dbf8-4568-a6dc-c88674d222b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475208 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/25458900-3da2-4c9d-8463-9acde2add0a6-cni-binary-copy\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475389 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhmz4\" (UniqueName: \"kubernetes.io/projected/25458900-3da2-4c9d-8463-9acde2add0a6-kube-api-access-jhmz4\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475462 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-openvswitch\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475505 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-etc-kubernetes\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475537 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-node-log\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475587 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-system-cni-dir\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475608 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-bin\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475631 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-os-release\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475656 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-multus-socket-dir-parent\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475680 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-hostroot\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475702 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-run-multus-certs\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475724 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-slash\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475749 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-ovn-kubernetes\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475784 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-netd\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475809 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8abb3db-dbf8-4568-a6dc-c88674d222b1-proxy-tls\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475835 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-env-overrides\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475862 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475888 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-cnibin\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475913 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-run-k8s-cni-cncf-io\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475939 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-netns\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475963 
4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-etc-openvswitch\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.475992 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-log-socket\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476022 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmg59\" (UniqueName: \"kubernetes.io/projected/a8abb3db-dbf8-4568-a6dc-c88674d222b1-kube-api-access-wmg59\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476051 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-kubelet\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476158 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-run-netns\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc 
kubenswrapper[4745]: I0121 10:37:19.476299 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-var-lib-kubelet\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476378 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-ovn\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476435 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8abb3db-dbf8-4568-a6dc-c88674d222b1-rootfs\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476456 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-multus-cni-dir\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476509 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-systemd-units\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc 
kubenswrapper[4745]: I0121 10:37:19.476602 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-var-lib-cni-multus\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476673 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/25458900-3da2-4c9d-8463-9acde2add0a6-multus-daemon-config\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476757 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-multus-conf-dir\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476790 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-systemd\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476816 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-script-lib\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc 
kubenswrapper[4745]: I0121 10:37:19.476858 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-var-lib-cni-bin\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.476873 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf85x\" (UniqueName: \"kubernetes.io/projected/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-kube-api-access-xf85x\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.491836 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.499207 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.499274 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.499286 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 
10:37:19.499307 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.499334 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.538967 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577203 4745 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577367 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-etc-kubernetes\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577409 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-node-log\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577425 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-system-cni-dir\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577472 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-run-multus-certs\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577491 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-slash\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577510 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-ovn-kubernetes\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577527 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-bin\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577603 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-os-release\") pod \"multus-p8q45\" (UID: 
\"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577628 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-multus-socket-dir-parent\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577660 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-hostroot\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577686 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-netd\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577720 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-os-release\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577796 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8abb3db-dbf8-4568-a6dc-c88674d222b1-proxy-tls\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577829 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-env-overrides\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577866 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577896 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/37687014-8686-4419-980d-e754a7f7037f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577920 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-etc-openvswitch\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577950 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-log-socket\") pod \"ovnkube-node-l7mcj\" (UID: 
\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.577975 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-cnibin\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578013 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-run-k8s-cni-cncf-io\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578034 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-netns\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578055 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmg59\" (UniqueName: \"kubernetes.io/projected/a8abb3db-dbf8-4568-a6dc-c88674d222b1-kube-api-access-wmg59\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578080 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-kubelet\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578102 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-cnibin\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578135 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-run-netns\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578157 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-var-lib-kubelet\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578179 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-ovn\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578200 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/37687014-8686-4419-980d-e754a7f7037f-cni-binary-copy\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " 
pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578222 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-systemd-units\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578287 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8abb3db-dbf8-4568-a6dc-c88674d222b1-rootfs\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578308 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-multus-cni-dir\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578333 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhwll\" (UniqueName: \"kubernetes.io/projected/37687014-8686-4419-980d-e754a7f7037f-kube-api-access-jhwll\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578365 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-var-lib-cni-multus\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " 
pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578384 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/25458900-3da2-4c9d-8463-9acde2add0a6-multus-daemon-config\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578417 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-multus-conf-dir\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578440 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-systemd\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578462 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-script-lib\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578490 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf85x\" (UniqueName: \"kubernetes.io/projected/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-kube-api-access-xf85x\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 
10:37:19.578518 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-var-lib-cni-bin\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578562 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-config\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578585 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578605 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovn-node-metrics-cert\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578623 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8abb3db-dbf8-4568-a6dc-c88674d222b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc 
kubenswrapper[4745]: I0121 10:37:19.578642 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/25458900-3da2-4c9d-8463-9acde2add0a6-cni-binary-copy\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578656 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhmz4\" (UniqueName: \"kubernetes.io/projected/25458900-3da2-4c9d-8463-9acde2add0a6-kube-api-access-jhmz4\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578671 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-var-lib-openvswitch\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578701 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-openvswitch\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578716 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-system-cni-dir\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578838 
4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-etc-kubernetes\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578883 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-node-log\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.578936 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-system-cni-dir\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.579027 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-run-multus-certs\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.579049 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-slash\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.579083 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-ovn-kubernetes\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.579116 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-bin\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.579176 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-os-release\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.579226 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-multus-socket-dir-parent\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.579261 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-hostroot\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.579301 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-netd\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.580722 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-multus-cni-dir\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.580766 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-kubelet\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.580801 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-run-netns\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.580839 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-var-lib-kubelet\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.580866 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-ovn\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.580900 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-systemd-units\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.580904 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-var-lib-cni-bin\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.580958 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-env-overrides\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581060 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-var-lib-cni-multus\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.580925 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/a8abb3db-dbf8-4568-a6dc-c88674d222b1-rootfs\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581576 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581661 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-etc-openvswitch\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581695 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-log-socket\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581739 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-cnibin\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581759 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8abb3db-dbf8-4568-a6dc-c88674d222b1-mcd-auth-proxy-config\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581788 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-netns\") pod 
\"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581765 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-host-run-k8s-cni-cncf-io\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581832 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-var-lib-openvswitch\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581866 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-openvswitch\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581873 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-systemd\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581931 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/25458900-3da2-4c9d-8463-9acde2add0a6-multus-conf-dir\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 
10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.581991 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/25458900-3da2-4c9d-8463-9acde2add0a6-cni-binary-copy\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.582143 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-config\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.582702 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-script-lib\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.582825 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/25458900-3da2-4c9d-8463-9acde2add0a6-multus-daemon-config\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.585924 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8abb3db-dbf8-4568-a6dc-c88674d222b1-proxy-tls\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.595037 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovn-node-metrics-cert\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.607980 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.608036 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.608046 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.608064 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.608074 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.679779 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.680196 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/37687014-8686-4419-980d-e754a7f7037f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.680329 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-cnibin\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.680408 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/37687014-8686-4419-980d-e754a7f7037f-cni-binary-copy\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.680481 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhwll\" (UniqueName: \"kubernetes.io/projected/37687014-8686-4419-980d-e754a7f7037f-kube-api-access-jhwll\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: 
\"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.680596 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.680665 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.680739 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.680824 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.680888 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-system-cni-dir\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.680969 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-os-release\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.681157 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-os-release\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.681313 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:37:23.681289062 +0000 UTC m=+28.142076650 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.682061 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/37687014-8686-4419-980d-e754a7f7037f-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.682176 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-cnibin\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.682667 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/37687014-8686-4419-980d-e754a7f7037f-cni-binary-copy\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.683236 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.683337 4745 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.683418 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.683532 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:23.683514603 +0000 UTC m=+28.144302201 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.683702 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.683798 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:23.683787111 +0000 UTC m=+28.144574709 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.684228 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-tuning-conf-dir\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.684365 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.684469 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:23.684455479 +0000 UTC m=+28.145243077 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.684597 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/37687014-8686-4419-980d-e754a7f7037f-system-cni-dir\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.710962 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.711380 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.711474 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.711584 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.711685 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.778797 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmg59\" (UniqueName: \"kubernetes.io/projected/a8abb3db-dbf8-4568-a6dc-c88674d222b1-kube-api-access-wmg59\") pod \"machine-config-daemon-b8tqm\" (UID: \"a8abb3db-dbf8-4568-a6dc-c88674d222b1\") " pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.781844 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.782165 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.814124 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.814154 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.814162 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.814180 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.814190 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.814442 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.782357 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.886485 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:19 crc kubenswrapper[4745]: E0121 10:37:19.886679 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:23.886648031 +0000 UTC m=+28.347435629 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.916810 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.916889 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.916905 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.916927 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.916941 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:19Z","lastTransitionTime":"2026-01-21T10:37:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.974605 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf85x\" (UniqueName: \"kubernetes.io/projected/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-kube-api-access-xf85x\") pod \"ovnkube-node-l7mcj\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.977688 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:54:53.28661366 +0000 UTC Jan 21 10:37:19 crc kubenswrapper[4745]: I0121 10:37:19.981975 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhmz4\" (UniqueName: \"kubernetes.io/projected/25458900-3da2-4c9d-8463-9acde2add0a6-kube-api-access-jhmz4\") pod \"multus-p8q45\" (UID: \"25458900-3da2-4c9d-8463-9acde2add0a6\") " pod="openshift-multus/multus-p8q45" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.000016 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.000100 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.000026 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:20 crc kubenswrapper[4745]: E0121 10:37:20.000220 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:20 crc kubenswrapper[4745]: E0121 10:37:20.000317 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:20 crc kubenswrapper[4745]: E0121 10:37:20.000392 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.026136 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.048585 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-p8q45" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.054048 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.054821 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.054860 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.054875 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.054891 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.054899 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:20Z","lastTransitionTime":"2026-01-21T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.158928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.159435 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.159445 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.159463 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.159473 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:20Z","lastTransitionTime":"2026-01-21T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.189469 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-7855h" event={"ID":"8bf705b0-6d21-4c31-ab5f-7439aa4607af","Type":"ContainerStarted","Data":"854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.190464 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"d5accf1adb2c50c251ba04041bcd212e05c044118907d8628c7daa54af5b84ed"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.191287 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"d85e55dccd1fb174d7b216681b68d6a41210777186d3c98e6d0b5e947a8dc82d"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.192179 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p8q45" event={"ID":"25458900-3da2-4c9d-8463-9acde2add0a6","Type":"ContainerStarted","Data":"08a223c109fba9faec90ecafb20e8122696942ce367aaf203e06d85b4bef8f4b"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.262342 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.262388 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.262396 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.262413 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 
10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.262424 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:20Z","lastTransitionTime":"2026-01-21T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.367861 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.368081 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.368099 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.368113 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.368123 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:20Z","lastTransitionTime":"2026-01-21T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.378791 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.384188 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhwll\" (UniqueName: \"kubernetes.io/projected/37687014-8686-4419-980d-e754a7f7037f-kube-api-access-jhwll\") pod \"multus-additional-cni-plugins-pnnzc\" (UID: \"37687014-8686-4419-980d-e754a7f7037f\") " pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.449893 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.479807 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.479844 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.479855 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.479872 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.479886 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:20Z","lastTransitionTime":"2026-01-21T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.501681 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.547047 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.581709 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.581754 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.581766 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.581786 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.581799 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:20Z","lastTransitionTime":"2026-01-21T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.647051 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.653318 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:20 crc kubenswrapper[4745]: W0121 10:37:20.664397 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37687014_8686_4419_980d_e754a7f7037f.slice/crio-7d7e78bc8b9ea379e28028ca1343393f37f534384f19fd33f839dc8e08b5e040 WatchSource:0}: Error finding container 7d7e78bc8b9ea379e28028ca1343393f37f534384f19fd33f839dc8e08b5e040: Status 404 returned error can't find the container with id 7d7e78bc8b9ea379e28028ca1343393f37f534384f19fd33f839dc8e08b5e040 Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.687271 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.687317 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 
10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.687329 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.687353 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.687367 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:20Z","lastTransitionTime":"2026-01-21T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.748024 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\
"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.790158 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.790211 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.790222 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.790244 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.790257 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:20Z","lastTransitionTime":"2026-01-21T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.818740 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.858628 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.900928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.900979 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.900994 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.901009 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.901018 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:20Z","lastTransitionTime":"2026-01-21T10:37:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.901041 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d
07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.950327 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.978046 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 16:44:47.104082277 +0000 UTC Jan 21 10:37:20 crc kubenswrapper[4745]: I0121 10:37:20.989589 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.015086 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.015157 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.015173 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.015200 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.015218 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:21Z","lastTransitionTime":"2026-01-21T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.034722 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.052234 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.067087 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.080846 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.093868 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.107384 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.118155 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.118314 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.118384 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.118464 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.118553 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:21Z","lastTransitionTime":"2026-01-21T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.197584 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab" exitCode=0 Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.197709 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.200015 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.200050 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.201966 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p8q45" event={"ID":"25458900-3da2-4c9d-8463-9acde2add0a6","Type":"ContainerStarted","Data":"099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.202907 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" event={"ID":"37687014-8686-4419-980d-e754a7f7037f","Type":"ContainerStarted","Data":"7d7e78bc8b9ea379e28028ca1343393f37f534384f19fd33f839dc8e08b5e040"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.204367 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.221216 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.221255 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.221267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.221290 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.221307 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:21Z","lastTransitionTime":"2026-01-21T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.222142 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.238899 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.254904 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.282218 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.302091 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.323596 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.330004 4745 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.330038 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.330050 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.330070 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.330085 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:21Z","lastTransitionTime":"2026-01-21T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.359366 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.384569 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.403602 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.420914 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.439198 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.439247 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.439258 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.439278 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.439287 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:21Z","lastTransitionTime":"2026-01-21T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.448893 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.486120 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.503130 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.518961 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.537260 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.545280 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.545485 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.545579 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 
10:37:21.545673 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.545751 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:21Z","lastTransitionTime":"2026-01-21T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.584353 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.609865 4745 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.644653 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hos
tIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.662742 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.662777 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.662786 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.662802 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.662811 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:21Z","lastTransitionTime":"2026-01-21T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.701887 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.743797 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.764993 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.765208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.765226 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.765236 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.765255 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.765268 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:21Z","lastTransitionTime":"2026-01-21T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.781006 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.805755 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.820919 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.839328 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kf868"] Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.839895 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kf868" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.850520 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.850843 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.851101 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.851255 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.852723 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.873628 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.873666 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.873675 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.873694 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.873706 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:21Z","lastTransitionTime":"2026-01-21T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.893718 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d
07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.915274 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4d23707-4f4b-4424-a350-f952443dcc4f-host\") pod \"node-ca-kf868\" (UID: \"f4d23707-4f4b-4424-a350-f952443dcc4f\") " pod="openshift-image-registry/node-ca-kf868" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.915386 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbs6w\" (UniqueName: \"kubernetes.io/projected/f4d23707-4f4b-4424-a350-f952443dcc4f-kube-api-access-cbs6w\") pod \"node-ca-kf868\" (UID: \"f4d23707-4f4b-4424-a350-f952443dcc4f\") " pod="openshift-image-registry/node-ca-kf868" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.915414 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/f4d23707-4f4b-4424-a350-f952443dcc4f-serviceca\") pod \"node-ca-kf868\" (UID: \"f4d23707-4f4b-4424-a350-f952443dcc4f\") " pod="openshift-image-registry/node-ca-kf868" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.921608 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mount
Path\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b
1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.943122 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.976110 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.978167 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 04:35:19.0997582 +0000 UTC Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.978519 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.978584 4745 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.978594 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.978610 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:21 crc kubenswrapper[4745]: I0121 10:37:21.978621 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:21Z","lastTransitionTime":"2026-01-21T10:37:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.001464 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:22 crc kubenswrapper[4745]: E0121 10:37:22.001951 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.002347 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:22 crc kubenswrapper[4745]: E0121 10:37:22.002400 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.002449 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:22 crc kubenswrapper[4745]: E0121 10:37:22.002501 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.016332 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4d23707-4f4b-4424-a350-f952443dcc4f-host\") pod \"node-ca-kf868\" (UID: \"f4d23707-4f4b-4424-a350-f952443dcc4f\") " pod="openshift-image-registry/node-ca-kf868" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.016422 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbs6w\" (UniqueName: \"kubernetes.io/projected/f4d23707-4f4b-4424-a350-f952443dcc4f-kube-api-access-cbs6w\") pod \"node-ca-kf868\" (UID: \"f4d23707-4f4b-4424-a350-f952443dcc4f\") " pod="openshift-image-registry/node-ca-kf868" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.016448 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f4d23707-4f4b-4424-a350-f952443dcc4f-serviceca\") pod \"node-ca-kf868\" (UID: \"f4d23707-4f4b-4424-a350-f952443dcc4f\") " pod="openshift-image-registry/node-ca-kf868" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.017619 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f4d23707-4f4b-4424-a350-f952443dcc4f-serviceca\") pod \"node-ca-kf868\" (UID: \"f4d23707-4f4b-4424-a350-f952443dcc4f\") " pod="openshift-image-registry/node-ca-kf868" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.017678 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f4d23707-4f4b-4424-a350-f952443dcc4f-host\") pod \"node-ca-kf868\" (UID: \"f4d23707-4f4b-4424-a350-f952443dcc4f\") " pod="openshift-image-registry/node-ca-kf868" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.022345 4745 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.035683 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbs6w\" (UniqueName: \"kubernetes.io/projected/f4d23707-4f4b-4424-a350-f952443dcc4f-kube-api-access-cbs6w\") pod \"node-ca-kf868\" (UID: \"f4d23707-4f4b-4424-a350-f952443dcc4f\") " pod="openshift-image-registry/node-ca-kf868" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.057119 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.078300 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.088504 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.088572 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.088584 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.088601 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.088613 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:22Z","lastTransitionTime":"2026-01-21T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.093941 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.120733 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.134330 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.158907 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.174820 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kf868" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.207878 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.207936 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.207949 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.207981 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.207997 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:22Z","lastTransitionTime":"2026-01-21T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.212297 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.234444 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.241277 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kf868" event={"ID":"f4d23707-4f4b-4424-a350-f952443dcc4f","Type":"ContainerStarted","Data":"5b1a5316f9b91ff7bb5438fe50d201eb4a8dd2164573356d28a81f4bf80025ef"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.254855 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.265386 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.265432 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.265445 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" 
event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.265458 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.269335 4745 generic.go:334] "Generic (PLEG): container finished" podID="37687014-8686-4419-980d-e754a7f7037f" containerID="1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2" exitCode=0 Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.269611 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" event={"ID":"37687014-8686-4419-980d-e754a7f7037f","Type":"ContainerDied","Data":"1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.273905 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.299207 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.312499 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.312543 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.312553 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:22 crc 
kubenswrapper[4745]: I0121 10:37:22.312568 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.312578 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:22Z","lastTransitionTime":"2026-01-21T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.339261 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.365943 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.397094 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.416618 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.416693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.416707 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.416729 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.416741 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:22Z","lastTransitionTime":"2026-01-21T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.424631 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.442109 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.471637 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.505014 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.523348 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.524501 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.524658 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.524673 
4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.524695 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.524713 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:22Z","lastTransitionTime":"2026-01-21T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.541659 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f
55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.571189 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.588569 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.604995 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.617483 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:22Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.628357 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.628407 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.628421 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 
10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.628442 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.628453 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:22Z","lastTransitionTime":"2026-01-21T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.732572 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.732623 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.732635 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.732657 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.732669 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:22Z","lastTransitionTime":"2026-01-21T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.835843 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.836204 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.836271 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.836336 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.836396 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:22Z","lastTransitionTime":"2026-01-21T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.939919 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.939957 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.939966 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.939984 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.939995 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:22Z","lastTransitionTime":"2026-01-21T10:37:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:22 crc kubenswrapper[4745]: I0121 10:37:22.978587 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 05:29:15.396533603 +0000 UTC Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.052704 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.052743 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.052754 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.052773 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.052789 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:23Z","lastTransitionTime":"2026-01-21T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.169009 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.169650 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.169663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.169688 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.169701 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:23Z","lastTransitionTime":"2026-01-21T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.281349 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.281382 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.281394 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.281410 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.281419 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:23Z","lastTransitionTime":"2026-01-21T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.293587 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kf868" event={"ID":"f4d23707-4f4b-4424-a350-f952443dcc4f","Type":"ContainerStarted","Data":"34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.299155 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" event={"ID":"37687014-8686-4419-980d-e754a7f7037f","Type":"ContainerStarted","Data":"8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.303766 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.303821 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.320012 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.339066 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.351989 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.366063 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.387189 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.387578 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.387674 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.387745 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.387836 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:23Z","lastTransitionTime":"2026-01-21T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.390293 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.407136 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.434033 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.452907 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.469434 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.484296 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.501524 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.501831 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.501907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.501977 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.502048 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:23Z","lastTransitionTime":"2026-01-21T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.505214 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.518970 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.531379 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.546268 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.562137 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.577078 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.591741 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.603454 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.605682 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.605723 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.605733 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.605753 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.605765 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:23Z","lastTransitionTime":"2026-01-21T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.624405 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.647832 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.666656 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.680243 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.707590 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.708759 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.708877 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.708951 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.709022 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.709099 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:23Z","lastTransitionTime":"2026-01-21T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.725945 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.759839 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.762111 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.762275 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:37:31.762245528 +0000 UTC m=+36.223033126 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.762341 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.762398 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.762432 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.762457 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.762522 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:31.762507876 +0000 UTC m=+36.223295474 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.762579 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.762611 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:31.762602948 +0000 UTC m=+36.223390546 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.762667 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.762707 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.762723 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.762789 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:31.762765683 +0000 UTC m=+36.223553281 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.792229 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.811215 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.811257 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.811270 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.811294 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.811308 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:23Z","lastTransitionTime":"2026-01-21T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.815819 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.847786 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:23Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.913671 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.913712 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.913725 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.913744 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.913756 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:23Z","lastTransitionTime":"2026-01-21T10:37:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.964393 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.964430 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.964445 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:23 crc kubenswrapper[4745]: E0121 10:37:23.964521 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:31.964498342 +0000 UTC m=+36.425285940 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.965004 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.980075 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 12:34:42.914812842 +0000 UTC Jan 21 10:37:23 crc kubenswrapper[4745]: I0121 10:37:23.999736 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:23.999782 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:23.999737 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:24 crc kubenswrapper[4745]: E0121 10:37:23.999909 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:24 crc kubenswrapper[4745]: E0121 10:37:24.000011 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:24 crc kubenswrapper[4745]: E0121 10:37:24.000145 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.018425 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.018496 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.018553 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.018586 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.018614 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:24Z","lastTransitionTime":"2026-01-21T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.121103 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.121136 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.121152 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.121171 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.121182 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:24Z","lastTransitionTime":"2026-01-21T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.224984 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.225053 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.225072 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.225099 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.225117 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:24Z","lastTransitionTime":"2026-01-21T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.309663 4745 generic.go:334] "Generic (PLEG): container finished" podID="37687014-8686-4419-980d-e754a7f7037f" containerID="8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab" exitCode=0 Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.309732 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" event={"ID":"37687014-8686-4419-980d-e754a7f7037f","Type":"ContainerDied","Data":"8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab"} Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.328061 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.328097 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.328105 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.328121 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.328132 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:24Z","lastTransitionTime":"2026-01-21T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.330994 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d
6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.350188 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.368408 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.382354 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.399805 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.414837 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\
\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.431918 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.431960 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.431970 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.431989 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.432000 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:24Z","lastTransitionTime":"2026-01-21T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.436340 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.453015 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.468341 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.479745 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.496322 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.512237 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.525236 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.536445 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.536489 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.536502 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:24 crc 
kubenswrapper[4745]: I0121 10:37:24.536540 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.536556 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:24Z","lastTransitionTime":"2026-01-21T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.543392 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:24Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.639360 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:24 crc 
kubenswrapper[4745]: I0121 10:37:24.639402 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.639413 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.639429 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.639440 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:24Z","lastTransitionTime":"2026-01-21T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.741601 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.741646 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.741688 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.741712 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.741723 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:24Z","lastTransitionTime":"2026-01-21T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.844653 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.844698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.844713 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.844734 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.844752 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:24Z","lastTransitionTime":"2026-01-21T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.947470 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.947587 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.947607 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.947642 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.947663 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:24Z","lastTransitionTime":"2026-01-21T10:37:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:24 crc kubenswrapper[4745]: I0121 10:37:24.980931 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 06:57:41.545741562 +0000 UTC Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.050795 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.050838 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.050851 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.050867 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.050878 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:25Z","lastTransitionTime":"2026-01-21T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.154016 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.154064 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.154073 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.154091 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.154102 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:25Z","lastTransitionTime":"2026-01-21T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.257673 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.257754 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.257800 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.257832 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.257856 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:25Z","lastTransitionTime":"2026-01-21T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.315722 4745 generic.go:334] "Generic (PLEG): container finished" podID="37687014-8686-4419-980d-e754a7f7037f" containerID="354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3" exitCode=0 Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.315803 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" event={"ID":"37687014-8686-4419-980d-e754a7f7037f","Type":"ContainerDied","Data":"354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.322579 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.334029 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.361818 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.361846 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.361854 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.361869 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.361880 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:25Z","lastTransitionTime":"2026-01-21T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.366624 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.382037 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.395858 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.410065 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.421006 4745 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.423794 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.435966 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.448955 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.463953 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 
10:37:25.464935 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.464996 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.465005 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.465020 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.465029 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:25Z","lastTransitionTime":"2026-01-21T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.477156 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.491824 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.504206 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491c
d10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.518517 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.534325 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:25Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.566805 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.566854 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.566867 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.566887 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.566900 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:25Z","lastTransitionTime":"2026-01-21T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.669131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.669192 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.669206 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.669227 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.669239 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:25Z","lastTransitionTime":"2026-01-21T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.772813 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.772866 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.772878 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.772902 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.772915 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:25Z","lastTransitionTime":"2026-01-21T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.876147 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.876199 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.876213 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.876234 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.876250 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:25Z","lastTransitionTime":"2026-01-21T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.979133 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.979183 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.979198 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.979220 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.979236 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:25Z","lastTransitionTime":"2026-01-21T10:37:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:25 crc kubenswrapper[4745]: I0121 10:37:25.982870 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 06:30:18.770022273 +0000 UTC Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:25.999969 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.000027 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.000065 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:26 crc kubenswrapper[4745]: E0121 10:37:26.000139 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:26 crc kubenswrapper[4745]: E0121 10:37:26.000269 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:26 crc kubenswrapper[4745]: E0121 10:37:26.000386 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.021509 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-
dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/opensh
ift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\
\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.037252 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.052874 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.069009 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 
10:37:26.084425 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.084764 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.084897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.085084 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.085743 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:26Z","lastTransitionTime":"2026-01-21T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.087953 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z 
is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.103622 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d610429
8fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a5
78bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.122687 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.137686 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.153151 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.167314 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.188722 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.188769 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.188785 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.188806 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.188818 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:26Z","lastTransitionTime":"2026-01-21T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.190065 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.204984 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.216830 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.227045 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.291076 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.291116 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.291127 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.291142 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.291153 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:26Z","lastTransitionTime":"2026-01-21T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.330357 4745 generic.go:334] "Generic (PLEG): container finished" podID="37687014-8686-4419-980d-e754a7f7037f" containerID="5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2" exitCode=0 Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.330427 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" event={"ID":"37687014-8686-4419-980d-e754a7f7037f","Type":"ContainerDied","Data":"5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2"} Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.348587 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef
318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.366437 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.390317 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\
\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8
b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.394010 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.394067 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.394081 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.394123 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.394150 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:26Z","lastTransitionTime":"2026-01-21T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.405330 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.422235 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.436561 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.447948 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.461057 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.474368 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.490264 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.499458 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.499510 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.499526 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.499656 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.499672 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:26Z","lastTransitionTime":"2026-01-21T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.512834 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.528894 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.547278 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.561857 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:26Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.602207 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.602255 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.602267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.602283 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.602293 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:26Z","lastTransitionTime":"2026-01-21T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.709006 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.709063 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.709076 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.709094 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.709105 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:26Z","lastTransitionTime":"2026-01-21T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.812753 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.813105 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.813116 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.813134 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.813146 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:26Z","lastTransitionTime":"2026-01-21T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.916208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.916263 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.916275 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.916298 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.916311 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:26Z","lastTransitionTime":"2026-01-21T10:37:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:26 crc kubenswrapper[4745]: I0121 10:37:26.983863 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:49:40.519442685 +0000 UTC Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.019864 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.019925 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.019942 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.019969 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.019984 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:27Z","lastTransitionTime":"2026-01-21T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.122580 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.122628 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.122640 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.122659 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.122673 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:27Z","lastTransitionTime":"2026-01-21T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.158287 4745 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.225745 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.225791 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.225805 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.226466 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.226553 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:27Z","lastTransitionTime":"2026-01-21T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.329369 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.329407 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.329417 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.329435 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.329447 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:27Z","lastTransitionTime":"2026-01-21T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.343122 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" event={"ID":"37687014-8686-4419-980d-e754a7f7037f","Type":"ContainerStarted","Data":"029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.348841 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.349131 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.349156 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.367265 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.382362 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.391398 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhw
ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.393301 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.416458 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cer
t-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri
-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.431427 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.431473 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.431484 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.431501 4745 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.431515 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:27Z","lastTransitionTime":"2026-01-21T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.434938 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.452883 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.472429 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.484917 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.500357 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.514141 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.534592 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.536475 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.536555 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.536574 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.536593 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.536603 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:27Z","lastTransitionTime":"2026-01-21T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.561760 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.573834 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.586710 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.604577 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.626127 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.672309 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.672394 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.672408 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.672431 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.672448 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:27Z","lastTransitionTime":"2026-01-21T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.690872 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.703974 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.721177 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhw
ll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.735312 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.750603 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.777022 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.777052 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.777063 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.777077 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.777086 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:27Z","lastTransitionTime":"2026-01-21T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.788659 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.802626 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.822179 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.839107 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\
\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.860744 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.878238 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
1T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.880241 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.880299 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.880310 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.880335 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.880349 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:27Z","lastTransitionTime":"2026-01-21T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.896728 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.913989 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:27Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.982702 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.983088 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.983098 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.983113 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.983121 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:27Z","lastTransitionTime":"2026-01-21T10:37:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:27 crc kubenswrapper[4745]: I0121 10:37:27.984714 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:58:30.21778375 +0000 UTC Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.000766 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:28 crc kubenswrapper[4745]: E0121 10:37:28.000917 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.001296 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:28 crc kubenswrapper[4745]: E0121 10:37:28.001350 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.001407 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:28 crc kubenswrapper[4745]: E0121 10:37:28.001450 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.085682 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.085719 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.085728 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.085743 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.085753 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:28Z","lastTransitionTime":"2026-01-21T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.187717 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.187757 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.187767 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.187783 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.187795 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:28Z","lastTransitionTime":"2026-01-21T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.290642 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.290702 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.290718 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.290742 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.290758 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:28Z","lastTransitionTime":"2026-01-21T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.353042 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.394163 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.394396 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.394465 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.394581 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.394662 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:28Z","lastTransitionTime":"2026-01-21T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.497162 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.497406 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.497551 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.497650 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.498059 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:28Z","lastTransitionTime":"2026-01-21T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.601382 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.601438 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.601454 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.601477 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.601492 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:28Z","lastTransitionTime":"2026-01-21T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.705611 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.705661 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.705676 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.705692 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.705703 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:28Z","lastTransitionTime":"2026-01-21T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.808059 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.808090 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.808098 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.808110 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.808121 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:28Z","lastTransitionTime":"2026-01-21T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.910583 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.910622 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.910633 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.910650 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.910662 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:28Z","lastTransitionTime":"2026-01-21T10:37:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:28 crc kubenswrapper[4745]: I0121 10:37:28.984851 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 06:04:41.162081153 +0000 UTC Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.013300 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.013329 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.013337 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.013352 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.013361 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.115598 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.115657 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.115668 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.115689 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.115700 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.218432 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.218492 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.218507 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.218546 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.218563 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.321103 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.321200 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.321212 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.321481 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.321503 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.360417 4745 generic.go:334] "Generic (PLEG): container finished" podID="37687014-8686-4419-980d-e754a7f7037f" containerID="029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e" exitCode=0 Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.360475 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" event={"ID":"37687014-8686-4419-980d-e754a7f7037f","Type":"ContainerDied","Data":"029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.360615 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.375925 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.376015 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.376030 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.376049 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.376089 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.388540 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: E0121 10:37:29.393067 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.397425 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.397498 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.397515 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.397563 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.397583 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.416157 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: E0121 10:37:29.416706 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.422220 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.422263 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.422273 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.422291 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.422303 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.434608 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: E0121 10:37:29.436160 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.441125 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.441156 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.441167 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.441187 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.441201 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.450939 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: E0121 10:37:29.455865 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.460337 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.460380 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.460391 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.460407 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.460419 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.466997 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: E0121 10:37:29.477427 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: E0121 10:37:29.477638 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.482619 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.482650 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.482661 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.482681 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.482707 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.493750 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.529196 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.554557 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.578504 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.585219 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.585256 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.585266 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.585285 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.585305 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.596219 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.626358 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.644916 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491c
d10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.658910 4745 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.681726 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:29Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.687314 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.687463 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.687574 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.687655 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.687711 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.790430 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.790473 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.790484 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.790501 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.790513 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.893870 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.893925 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.893937 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.893967 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.893977 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.985829 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 09:10:26.794632331 +0000 UTC Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.996084 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.996381 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.996400 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.996414 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.996423 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:29Z","lastTransitionTime":"2026-01-21T10:37:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:29 crc kubenswrapper[4745]: I0121 10:37:29.999889 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:29.999986 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:30 crc kubenswrapper[4745]: E0121 10:37:30.000036 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.000106 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:30 crc kubenswrapper[4745]: E0121 10:37:30.000186 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:30 crc kubenswrapper[4745]: E0121 10:37:30.000270 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.098813 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.098882 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.098896 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.098919 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.098933 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:30Z","lastTransitionTime":"2026-01-21T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.202055 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.202115 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.202128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.202150 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.202166 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:30Z","lastTransitionTime":"2026-01-21T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.305561 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.305603 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.305614 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.305660 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.305677 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:30Z","lastTransitionTime":"2026-01-21T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.368754 4745 generic.go:334] "Generic (PLEG): container finished" podID="37687014-8686-4419-980d-e754a7f7037f" containerID="f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a" exitCode=0 Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.368850 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" event={"ID":"37687014-8686-4419-980d-e754a7f7037f","Type":"ContainerDied","Data":"f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a"} Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.394208 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.411109 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.411151 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.411164 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.411182 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.411196 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:30Z","lastTransitionTime":"2026-01-21T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.417560 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.434897 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.449702 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.464636 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.480991 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a604
82f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.498768 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.514170 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.514386 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.514423 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.514435 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.514455 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.514467 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:30Z","lastTransitionTime":"2026-01-21T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.531327 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.549091 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.562907 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.577952 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.598167 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-polic
y-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/s
tatic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.614680 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:30Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.616956 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.616989 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.616997 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.617015 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.617028 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:30Z","lastTransitionTime":"2026-01-21T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.720183 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.720225 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.720234 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.720251 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.720263 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:30Z","lastTransitionTime":"2026-01-21T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.823845 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.824307 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.824417 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.824783 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.824895 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:30Z","lastTransitionTime":"2026-01-21T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.928559 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.928611 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.928619 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.928639 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.928650 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:30Z","lastTransitionTime":"2026-01-21T10:37:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:30 crc kubenswrapper[4745]: I0121 10:37:30.987008 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:36:27.887278679 +0000 UTC Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.032508 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.032946 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.033212 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.033416 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.033625 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:31Z","lastTransitionTime":"2026-01-21T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.136200 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.136505 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.136622 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.136698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.136764 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:31Z","lastTransitionTime":"2026-01-21T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.239621 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.239698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.239710 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.239738 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.239750 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:31Z","lastTransitionTime":"2026-01-21T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.342243 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.342281 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.342290 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.342306 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.342318 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:31Z","lastTransitionTime":"2026-01-21T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.375752 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/0.log" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.379633 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4" exitCode=1 Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.379714 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.380817 4745 scope.go:117] "RemoveContainer" containerID="8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.386962 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" event={"ID":"37687014-8686-4419-980d-e754a7f7037f","Type":"ContainerStarted","Data":"78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.399853 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.421088 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.438693 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.447203 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.447266 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.447279 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:31 crc 
kubenswrapper[4745]: I0121 10:37:31.447295 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.447307 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:31Z","lastTransitionTime":"2026-01-21T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.456995 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.481103 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.498115 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.515128 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.527317 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.542018 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.551093 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.551467 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.551600 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.551718 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.551824 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:31Z","lastTransitionTime":"2026-01-21T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.558039 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.578835 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"message\\\":\\\"91 5886 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 10:37:30.453922 5886 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 10:37:30.453938 5886 handler.go:190] 
Sending *v1.Node event handler 7 for removal\\\\nI0121 10:37:30.453950 5886 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 10:37:30.453973 5886 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 10:37:30.454000 5886 factory.go:656] Stopping watch factory\\\\nI0121 10:37:30.454036 5886 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 10:37:30.454051 5886 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 10:37:30.454060 5886 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:30.454070 5886 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 10:37:30.454088 5886 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 10:37:30.454087 5886 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454258 5886 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:37:30.454296 5886 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454350 5886 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.592615 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.613627 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.626038 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.641945 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.654938 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.654977 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.654986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.655000 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.655011 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:31Z","lastTransitionTime":"2026-01-21T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.657351 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.669471 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.686031 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.701707 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.715118 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.730807 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.749181 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.758395 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.758442 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.758454 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.758471 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.758484 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:31Z","lastTransitionTime":"2026-01-21T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.764656 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.776282 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.801245 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"message\\\":\\\"91 5886 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 10:37:30.453922 5886 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 10:37:30.453938 5886 handler.go:190] 
Sending *v1.Node event handler 7 for removal\\\\nI0121 10:37:30.453950 5886 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 10:37:30.453973 5886 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 10:37:30.454000 5886 factory.go:656] Stopping watch factory\\\\nI0121 10:37:30.454036 5886 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 10:37:30.454051 5886 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 10:37:30.454060 5886 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:30.454070 5886 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 10:37:30.454088 5886 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 10:37:30.454087 5886 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454258 5886 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:37:30.454296 5886 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454350 5886 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.816931 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.831839 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.843329 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:31Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.854743 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.854857 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.854889 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.854922 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:31 crc kubenswrapper[4745]: E0121 10:37:31.854996 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:37:47.854979625 +0000 UTC m=+52.315767223 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:37:31 crc kubenswrapper[4745]: E0121 10:37:31.855058 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:31 crc kubenswrapper[4745]: E0121 10:37:31.855091 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:47.855085268 +0000 UTC m=+52.315872866 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:31 crc kubenswrapper[4745]: E0121 10:37:31.855104 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:31 crc kubenswrapper[4745]: E0121 10:37:31.855104 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:31 crc kubenswrapper[4745]: E0121 10:37:31.855269 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:31 crc kubenswrapper[4745]: E0121 10:37:31.855298 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:31 crc kubenswrapper[4745]: E0121 10:37:31.855209 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:47.855182351 +0000 UTC m=+52.315969949 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:31 crc kubenswrapper[4745]: E0121 10:37:31.855393 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:47.855357556 +0000 UTC m=+52.316145324 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.862370 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.862419 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.862433 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.862454 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.862467 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:31Z","lastTransitionTime":"2026-01-21T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.965332 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.965368 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.965379 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.965397 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.965407 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:31Z","lastTransitionTime":"2026-01-21T10:37:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.988374 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 03:13:54.243382761 +0000 UTC Jan 21 10:37:31 crc kubenswrapper[4745]: I0121 10:37:31.999773 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:31.999793 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:31.999935 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:32 crc kubenswrapper[4745]: E0121 10:37:32.000024 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:32 crc kubenswrapper[4745]: E0121 10:37:32.000125 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:32 crc kubenswrapper[4745]: E0121 10:37:32.000281 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.057130 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:32 crc kubenswrapper[4745]: E0121 10:37:32.057348 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:32 crc kubenswrapper[4745]: E0121 10:37:32.057396 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:32 crc kubenswrapper[4745]: E0121 10:37:32.057409 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:32 crc kubenswrapper[4745]: E0121 10:37:32.057469 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:48.057450425 +0000 UTC m=+52.518238023 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.067655 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.067716 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.067726 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.067742 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.067754 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:32Z","lastTransitionTime":"2026-01-21T10:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.170919 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.170953 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.170963 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.170980 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.170992 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:32Z","lastTransitionTime":"2026-01-21T10:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.273262 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.273342 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.273356 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.273388 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.273409 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:32Z","lastTransitionTime":"2026-01-21T10:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.376869 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.376933 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.376948 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.376982 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.377002 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:32Z","lastTransitionTime":"2026-01-21T10:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.392386 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/0.log" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.395230 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674"} Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.395396 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.411341 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276
703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.434592 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"message\\\":\\\"91 5886 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0121 10:37:30.453922 5886 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 10:37:30.453938 5886 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 10:37:30.453950 5886 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 10:37:30.453973 5886 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 10:37:30.454000 5886 factory.go:656] Stopping watch factory\\\\nI0121 10:37:30.454036 5886 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 10:37:30.454051 5886 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 10:37:30.454060 5886 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:30.454070 5886 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 10:37:30.454088 5886 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 10:37:30.454087 5886 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454258 5886 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:37:30.454296 5886 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454350 5886 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.449551 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.466207 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.479781 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.479838 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.479854 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.479877 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.479893 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:32Z","lastTransitionTime":"2026-01-21T10:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.483961 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.501302 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.516213 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.529852 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.547690 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.564858 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.592126 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.592194 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.592204 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.592224 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.592235 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:32Z","lastTransitionTime":"2026-01-21T10:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.593491 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.612284 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imag
eID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.632713 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.648985 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:32Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.695355 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.695402 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.695412 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.695429 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.695443 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:32Z","lastTransitionTime":"2026-01-21T10:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.798590 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.798690 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.798703 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.798721 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.798733 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:32Z","lastTransitionTime":"2026-01-21T10:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.901653 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.901704 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.901716 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.901736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.901753 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:32Z","lastTransitionTime":"2026-01-21T10:37:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:32 crc kubenswrapper[4745]: I0121 10:37:32.988908 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:22:02.68217265 +0000 UTC Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.005365 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.005436 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.005446 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.005466 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.005478 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:33Z","lastTransitionTime":"2026-01-21T10:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.108178 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.108241 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.108259 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.108699 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.108738 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:33Z","lastTransitionTime":"2026-01-21T10:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.212328 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.212403 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.212415 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.212438 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.212453 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:33Z","lastTransitionTime":"2026-01-21T10:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.295504 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx"] Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.296364 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.299275 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.301255 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.315867 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.315897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.315907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.315928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.315941 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:33Z","lastTransitionTime":"2026-01-21T10:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.322922 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.336709 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.352435 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.369411 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.379412 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4436701b-89b4-411a-acc4-95be1ca116a9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: \"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.379465 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4436701b-89b4-411a-acc4-95be1ca116a9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: \"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.379550 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4436701b-89b4-411a-acc4-95be1ca116a9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: \"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.379593 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p26p\" (UniqueName: \"kubernetes.io/projected/4436701b-89b4-411a-acc4-95be1ca116a9-kube-api-access-8p26p\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: 
\"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.391548 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\
\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.411774 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.418866 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.418919 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.418929 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.418948 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.418962 4745 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:33Z","lastTransitionTime":"2026-01-21T10:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.433156 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.452721 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.467335 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.480295 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4436701b-89b4-411a-acc4-95be1ca116a9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: \"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.480413 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p26p\" (UniqueName: \"kubernetes.io/projected/4436701b-89b4-411a-acc4-95be1ca116a9-kube-api-access-8p26p\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: \"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.480495 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4436701b-89b4-411a-acc4-95be1ca116a9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: \"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.480596 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/4436701b-89b4-411a-acc4-95be1ca116a9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: \"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.481505 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4436701b-89b4-411a-acc4-95be1ca116a9-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: \"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.482119 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4436701b-89b4-411a-acc4-95be1ca116a9-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: \"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.487019 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.489552 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4436701b-89b4-411a-acc4-95be1ca116a9-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: \"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.505335 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p26p\" (UniqueName: \"kubernetes.io/projected/4436701b-89b4-411a-acc4-95be1ca116a9-kube-api-access-8p26p\") pod \"ovnkube-control-plane-749d76644c-tnqtx\" (UID: \"4436701b-89b4-411a-acc4-95be1ca116a9\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.513471 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"message\\\":\\\"91 5886 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 
10:37:30.453922 5886 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 10:37:30.453938 5886 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 10:37:30.453950 5886 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0121 10:37:30.453973 5886 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 10:37:30.454000 5886 factory.go:656] Stopping watch factory\\\\nI0121 10:37:30.454036 5886 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 10:37:30.454051 5886 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 10:37:30.454060 5886 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:30.454070 5886 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 10:37:30.454088 5886 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 10:37:30.454087 5886 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454258 5886 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:37:30.454296 5886 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454350 5886 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.522330 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.522355 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.522366 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.522384 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.522397 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:33Z","lastTransitionTime":"2026-01-21T10:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.531691 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.550511 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\
\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.569981 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.584861 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:33Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.615065 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.626609 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.626657 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.626668 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.626687 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.626700 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:33Z","lastTransitionTime":"2026-01-21T10:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:33 crc kubenswrapper[4745]: W0121 10:37:33.636282 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4436701b_89b4_411a_acc4_95be1ca116a9.slice/crio-a4cf3b38075b8019d7ba4403110b4f73049bf7f5113ed9b63c9f531ba3b74212 WatchSource:0}: Error finding container a4cf3b38075b8019d7ba4403110b4f73049bf7f5113ed9b63c9f531ba3b74212: Status 404 returned error can't find the container with id a4cf3b38075b8019d7ba4403110b4f73049bf7f5113ed9b63c9f531ba3b74212 Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.729497 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.729610 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.729628 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.729655 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.729672 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:33Z","lastTransitionTime":"2026-01-21T10:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.832851 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.832894 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.832903 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.832923 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.832935 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:33Z","lastTransitionTime":"2026-01-21T10:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.936150 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.936201 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.936211 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.936231 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.936248 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:33Z","lastTransitionTime":"2026-01-21T10:37:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:33 crc kubenswrapper[4745]: I0121 10:37:33.989374 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 17:54:15.01390618 +0000 UTC Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.002271 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:34 crc kubenswrapper[4745]: E0121 10:37:34.002412 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.003254 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:34 crc kubenswrapper[4745]: E0121 10:37:34.003354 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.003499 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:34 crc kubenswrapper[4745]: E0121 10:37:34.003597 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.039032 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.039092 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.039107 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.039164 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.039186 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:34Z","lastTransitionTime":"2026-01-21T10:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.142023 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.142070 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.142084 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.142105 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.142120 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:34Z","lastTransitionTime":"2026-01-21T10:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.245490 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.245567 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.245581 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.245602 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.245615 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:34Z","lastTransitionTime":"2026-01-21T10:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.348831 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.348900 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.348922 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.348949 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.348971 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:34Z","lastTransitionTime":"2026-01-21T10:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.387493 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-px52r"] Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.388238 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:34 crc kubenswrapper[4745]: E0121 10:37:34.388329 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.392427 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.392551 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vssx\" (UniqueName: \"kubernetes.io/projected/df21a803-8072-4f8f-8f3a-00267f9c3419-kube-api-access-2vssx\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.404971 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" event={"ID":"4436701b-89b4-411a-acc4-95be1ca116a9","Type":"ContainerStarted","Data":"909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.405028 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" event={"ID":"4436701b-89b4-411a-acc4-95be1ca116a9","Type":"ContainerStarted","Data":"db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.405041 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" event={"ID":"4436701b-89b4-411a-acc4-95be1ca116a9","Type":"ContainerStarted","Data":"a4cf3b38075b8019d7ba4403110b4f73049bf7f5113ed9b63c9f531ba3b74212"} Jan 21 10:37:34 crc kubenswrapper[4745]: 
I0121 10:37:34.407258 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/1.log" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.408077 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/0.log" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.411503 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674" exitCode=1 Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.411582 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.411497 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.411666 4745 scope.go:117] "RemoveContainer" containerID="8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.412574 4745 scope.go:117] "RemoveContainer" containerID="a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674" Jan 21 10:37:34 crc kubenswrapper[4745]: E0121 10:37:34.412763 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.429268 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.443799 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.451626 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.451683 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.451696 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.451718 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.451734 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:34Z","lastTransitionTime":"2026-01-21T10:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.457513 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.471454 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.485848 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.493314 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.493407 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vssx\" (UniqueName: \"kubernetes.io/projected/df21a803-8072-4f8f-8f3a-00267f9c3419-kube-api-access-2vssx\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:34 crc kubenswrapper[4745]: E0121 10:37:34.494130 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:34 crc kubenswrapper[4745]: E0121 10:37:34.494286 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs podName:df21a803-8072-4f8f-8f3a-00267f9c3419 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:34.99425104 +0000 UTC m=+39.455038818 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs") pod "network-metrics-daemon-px52r" (UID: "df21a803-8072-4f8f-8f3a-00267f9c3419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.507049 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.519388 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vssx\" (UniqueName: \"kubernetes.io/projected/df21a803-8072-4f8f-8f3a-00267f9c3419-kube-api-access-2vssx\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.527674 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc 
kubenswrapper[4745]: I0121 10:37:34.543797 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.554826 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.554871 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.554882 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.554913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.554930 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:34Z","lastTransitionTime":"2026-01-21T10:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.560186 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.573488 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.597153 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"message\\\":\\\"91 5886 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 10:37:30.453922 5886 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 10:37:30.453938 5886 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 10:37:30.453950 5886 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI0121 10:37:30.453973 5886 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 10:37:30.454000 5886 factory.go:656] Stopping watch factory\\\\nI0121 10:37:30.454036 5886 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 10:37:30.454051 5886 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 10:37:30.454060 5886 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:30.454070 5886 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 10:37:30.454088 5886 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 10:37:30.454087 5886 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454258 5886 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:37:30.454296 5886 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454350 5886 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.616113 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.632929 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.649082 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.657822 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.657879 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.657888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.657912 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.657926 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:34Z","lastTransitionTime":"2026-01-21T10:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.669555 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.684625 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.707986 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8960d518e94a1ae92eee3eaca11c12a1ef44a5802f7d519f12e4a7cf03556eb4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"message\\\":\\\"91 5886 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0121 10:37:30.453922 5886 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0121 10:37:30.453938 5886 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0121 10:37:30.453950 5886 handler.go:190] Sending *v1.Node event handler 
2 for removal\\\\nI0121 10:37:30.453973 5886 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0121 10:37:30.454000 5886 factory.go:656] Stopping watch factory\\\\nI0121 10:37:30.454036 5886 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0121 10:37:30.454051 5886 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0121 10:37:30.454060 5886 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:30.454070 5886 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0121 10:37:30.454088 5886 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0121 10:37:30.454087 5886 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454258 5886 handler.go:208] Removed *v1.Node event handler 7\\\\nI0121 10:37:30.454296 5886 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:30.454350 5886 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"message\\\":\\\"Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, 
Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.188\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 10:37:32.602342 6075 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 10:37:32.602138 6075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef4
5ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.729465 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.744217 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.760916 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.760954 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.760965 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.760996 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.761009 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:34Z","lastTransitionTime":"2026-01-21T10:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.761417 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.777252 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.794056 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.809215 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771ae
e1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.825273 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.841643 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.856132 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.864259 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.864326 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.864339 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.864360 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.864379 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:34Z","lastTransitionTime":"2026-01-21T10:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.872026 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.889371 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.904409 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc 
kubenswrapper[4745]: I0121 10:37:34.919420 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.935053 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:34Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.967773 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.967825 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:34 crc 
kubenswrapper[4745]: I0121 10:37:34.967836 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.967860 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.967872 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:34Z","lastTransitionTime":"2026-01-21T10:37:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.989940 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:36:53.164998355 +0000 UTC Jan 21 10:37:34 crc kubenswrapper[4745]: I0121 10:37:34.998717 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:34 crc kubenswrapper[4745]: E0121 10:37:34.998949 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:34 crc kubenswrapper[4745]: E0121 10:37:34.999553 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs podName:df21a803-8072-4f8f-8f3a-00267f9c3419 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:37:35.999503194 +0000 UTC m=+40.460290792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs") pod "network-metrics-daemon-px52r" (UID: "df21a803-8072-4f8f-8f3a-00267f9c3419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.070799 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.070845 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.070856 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.070870 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.070880 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:35Z","lastTransitionTime":"2026-01-21T10:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.174050 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.174099 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.174114 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.174133 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.174145 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:35Z","lastTransitionTime":"2026-01-21T10:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.272924 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.278846 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.278902 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.278915 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.278938 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.278954 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:35Z","lastTransitionTime":"2026-01-21T10:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.383643 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.383702 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.383715 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.383734 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.383748 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:35Z","lastTransitionTime":"2026-01-21T10:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.418428 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/1.log" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.424070 4745 scope.go:117] "RemoveContainer" containerID="a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674" Jan 21 10:37:35 crc kubenswrapper[4745]: E0121 10:37:35.424827 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.442619 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.458474 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.474131 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.486594 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.486665 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.486684 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:35 crc 
kubenswrapper[4745]: I0121 10:37:35.486710 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.486728 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:35Z","lastTransitionTime":"2026-01-21T10:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.492571 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6
7abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.510838 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.529969 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.544841 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.556685 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.573741 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.587488 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc 
kubenswrapper[4745]: I0121 10:37:35.589604 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.589747 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.589816 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.589881 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.589945 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:35Z","lastTransitionTime":"2026-01-21T10:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.603640 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.622724 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"message\\\":\\\"Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.188\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 10:37:32.602342 6075 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 10:37:32.602138 6075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8f
a4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.638062 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.653288 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.717504 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.717586 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.717598 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.717619 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.717633 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:35Z","lastTransitionTime":"2026-01-21T10:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.723219 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.737629 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b8
5f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:35Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.821158 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.821200 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.821210 4745 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.821228 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.821241 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:35Z","lastTransitionTime":"2026-01-21T10:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.924014 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.924069 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.924082 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.924101 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.924114 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:35Z","lastTransitionTime":"2026-01-21T10:37:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.990855 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:01:32.949110547 +0000 UTC Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.999572 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:35 crc kubenswrapper[4745]: E0121 10:37:35.999743 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:35 crc kubenswrapper[4745]: I0121 10:37:35.999891 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:36 crc kubenswrapper[4745]: E0121 10:37:36.000080 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.000212 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:36 crc kubenswrapper[4745]: E0121 10:37:36.000320 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.000326 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:36 crc kubenswrapper[4745]: E0121 10:37:36.000430 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.019411 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:36 crc kubenswrapper[4745]: E0121 10:37:36.019598 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:36 crc kubenswrapper[4745]: E0121 10:37:36.019657 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs podName:df21a803-8072-4f8f-8f3a-00267f9c3419 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:38.019637017 +0000 UTC m=+42.480424615 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs") pod "network-metrics-daemon-px52r" (UID: "df21a803-8072-4f8f-8f3a-00267f9c3419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.019813 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.028001 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.028053 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.028067 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.028091 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.028106 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:36Z","lastTransitionTime":"2026-01-21T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.039775 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.054808 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.075038 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.094764 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc 
kubenswrapper[4745]: I0121 10:37:36.112111 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.128071 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.130904 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.131025 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.131146 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.131234 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.131322 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:36Z","lastTransitionTime":"2026-01-21T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.164917 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"message\\\":\\\"Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.188\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 10:37:32.602342 6075 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 10:37:32.602138 6075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8f
a4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.191731 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.212871 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.233477 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.233809 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.233871 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.233979 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.234049 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:36Z","lastTransitionTime":"2026-01-21T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.235422 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.261222 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b8
5f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.279208 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.298002 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.314900 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.334760 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:36Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.336568 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.336629 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.336646 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:36 crc 
kubenswrapper[4745]: I0121 10:37:36.336677 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.336695 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:36Z","lastTransitionTime":"2026-01-21T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.438903 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.438934 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.438944 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.438957 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.438971 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:36Z","lastTransitionTime":"2026-01-21T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.541659 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.541699 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.541710 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.541731 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.541743 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:36Z","lastTransitionTime":"2026-01-21T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.644838 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.645209 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.645277 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.645343 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.645408 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:36Z","lastTransitionTime":"2026-01-21T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.748988 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.749045 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.749058 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.749082 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.749096 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:36Z","lastTransitionTime":"2026-01-21T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.851644 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.851968 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.852034 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.852100 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.852197 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:36Z","lastTransitionTime":"2026-01-21T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.956590 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.956913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.957082 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.957190 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.957293 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:36Z","lastTransitionTime":"2026-01-21T10:37:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:36 crc kubenswrapper[4745]: I0121 10:37:36.992074 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:22:19.794553136 +0000 UTC Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.060396 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.060845 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.060967 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.061065 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.061210 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:37Z","lastTransitionTime":"2026-01-21T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.164568 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.164618 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.164629 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.164646 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.164657 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:37Z","lastTransitionTime":"2026-01-21T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.267984 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.268027 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.268038 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.268056 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.268067 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:37Z","lastTransitionTime":"2026-01-21T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.370863 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.370908 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.370919 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.370942 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.370954 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:37Z","lastTransitionTime":"2026-01-21T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.473582 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.473628 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.473643 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.473667 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.473680 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:37Z","lastTransitionTime":"2026-01-21T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.577137 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.577194 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.577208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.577227 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.577245 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:37Z","lastTransitionTime":"2026-01-21T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.680390 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.680857 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.680951 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.681040 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.681118 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:37Z","lastTransitionTime":"2026-01-21T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.784345 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.784858 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.785113 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.785213 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.785304 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:37Z","lastTransitionTime":"2026-01-21T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.888053 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.888424 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.888585 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.888702 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.888818 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:37Z","lastTransitionTime":"2026-01-21T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.991966 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.992014 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.992024 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.992043 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.992055 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:37Z","lastTransitionTime":"2026-01-21T10:37:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.992587 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 09:00:01.627440446 +0000 UTC Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.999576 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:37 crc kubenswrapper[4745]: I0121 10:37:37.999633 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:37.999650 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:37.999615 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:38 crc kubenswrapper[4745]: E0121 10:37:37.999814 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:38 crc kubenswrapper[4745]: E0121 10:37:37.999967 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:38 crc kubenswrapper[4745]: E0121 10:37:38.000032 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:38 crc kubenswrapper[4745]: E0121 10:37:38.000132 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.040148 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:38 crc kubenswrapper[4745]: E0121 10:37:38.040347 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:38 crc kubenswrapper[4745]: E0121 10:37:38.040433 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs podName:df21a803-8072-4f8f-8f3a-00267f9c3419 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:42.040404478 +0000 UTC m=+46.501192076 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs") pod "network-metrics-daemon-px52r" (UID: "df21a803-8072-4f8f-8f3a-00267f9c3419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.095315 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.095704 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.095741 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.095785 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.095804 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:38Z","lastTransitionTime":"2026-01-21T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.199326 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.199364 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.199373 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.199388 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.199398 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:38Z","lastTransitionTime":"2026-01-21T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.303237 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.303308 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.303322 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.303346 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.303360 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:38Z","lastTransitionTime":"2026-01-21T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.406519 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.406588 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.406601 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.406625 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.406639 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:38Z","lastTransitionTime":"2026-01-21T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.509459 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.509548 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.509570 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.509593 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.509605 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:38Z","lastTransitionTime":"2026-01-21T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.611882 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.611928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.611943 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.611963 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.611978 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:38Z","lastTransitionTime":"2026-01-21T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.715358 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.715430 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.715444 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.715469 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.715486 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:38Z","lastTransitionTime":"2026-01-21T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.818709 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.818756 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.818769 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.818785 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.818798 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:38Z","lastTransitionTime":"2026-01-21T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.921748 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.921812 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.921830 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.921851 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.921866 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:38Z","lastTransitionTime":"2026-01-21T10:37:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:38 crc kubenswrapper[4745]: I0121 10:37:38.993516 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:11:46.166753288 +0000 UTC Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.024952 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.025000 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.025013 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.025031 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.025048 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.127621 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.127665 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.127675 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.127697 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.127709 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.230769 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.230826 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.230848 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.230874 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.230886 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.334094 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.334142 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.334163 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.334185 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.334198 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.442087 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.442149 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.442162 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.442182 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.442194 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.545075 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.545884 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.545984 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.546070 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.546148 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.648768 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.649173 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.649265 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.649360 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.649441 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.752490 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.752597 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.752613 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.752636 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.752647 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.762243 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.762281 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.762290 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.762308 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.762318 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: E0121 10:37:39.778155 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.782649 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.782860 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.782964 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.783061 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.783149 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: E0121 10:37:39.798592 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to previous attempt; conditions, image list, and nodeInfo omitted] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.802986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.803023 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.803033 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.803051 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.803062 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: E0121 10:37:39.815666 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to previous attempt; conditions, image list, and nodeInfo omitted] for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.819906 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.820047 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.820125 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.820208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.820278 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: E0121 10:37:39.834312 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to previous attempts; truncated at end of capture]
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.839039 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.839192 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.839267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.839381 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.839468 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: E0121 10:37:39.853224 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:39Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:39 crc kubenswrapper[4745]: E0121 10:37:39.853423 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.855645 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.855681 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.855697 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.855717 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.855732 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.959086 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.959139 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.959151 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.959187 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.959230 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:39Z","lastTransitionTime":"2026-01-21T10:37:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:39 crc kubenswrapper[4745]: I0121 10:37:39.994506 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:21:55.352931003 +0000 UTC Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.000018 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.000079 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.000018 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.000179 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:40 crc kubenswrapper[4745]: E0121 10:37:40.000200 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:40 crc kubenswrapper[4745]: E0121 10:37:40.000281 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:40 crc kubenswrapper[4745]: E0121 10:37:40.000340 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:40 crc kubenswrapper[4745]: E0121 10:37:40.000466 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.062707 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.062749 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.062760 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.062784 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.062797 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:40Z","lastTransitionTime":"2026-01-21T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.165738 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.165827 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.165841 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.165863 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.165881 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:40Z","lastTransitionTime":"2026-01-21T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.269141 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.269194 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.269207 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.269229 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.269242 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:40Z","lastTransitionTime":"2026-01-21T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.372033 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.372081 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.372092 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.372119 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.372134 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:40Z","lastTransitionTime":"2026-01-21T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.475155 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.475213 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.475231 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.475252 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.475265 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:40Z","lastTransitionTime":"2026-01-21T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.578073 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.578131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.578142 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.578164 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.578177 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:40Z","lastTransitionTime":"2026-01-21T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.681463 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.681517 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.681545 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.681567 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.681583 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:40Z","lastTransitionTime":"2026-01-21T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.785772 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.785842 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.785856 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.785894 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.785907 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:40Z","lastTransitionTime":"2026-01-21T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.889645 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.890006 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.890087 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.890180 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.890273 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:40Z","lastTransitionTime":"2026-01-21T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.994064 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.994128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.994139 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.994161 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.994172 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:40Z","lastTransitionTime":"2026-01-21T10:37:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:40 crc kubenswrapper[4745]: I0121 10:37:40.995155 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 06:36:49.128739349 +0000 UTC Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.097553 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.097599 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.097609 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.097627 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.097641 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:41Z","lastTransitionTime":"2026-01-21T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.200846 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.200908 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.200929 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.200958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.200975 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:41Z","lastTransitionTime":"2026-01-21T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.303348 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.303397 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.303407 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.303428 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.303441 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:41Z","lastTransitionTime":"2026-01-21T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.405425 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.405795 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.405861 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.405937 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.406003 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:41Z","lastTransitionTime":"2026-01-21T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.509172 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.509245 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.509259 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.509282 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.509300 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:41Z","lastTransitionTime":"2026-01-21T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.611964 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.612025 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.612042 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.612063 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.612083 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:41Z","lastTransitionTime":"2026-01-21T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.715123 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.715168 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.715180 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.715203 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.715222 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:41Z","lastTransitionTime":"2026-01-21T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.818249 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.818295 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.818308 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.818331 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.818345 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:41Z","lastTransitionTime":"2026-01-21T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.921452 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.921508 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.921518 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.921561 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.921580 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:41Z","lastTransitionTime":"2026-01-21T10:37:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.996234 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 11:11:38.989204718 +0000 UTC Jan 21 10:37:41 crc kubenswrapper[4745]: I0121 10:37:41.999631 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:41.999756 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:41.999802 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:41.999910 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:42 crc kubenswrapper[4745]: E0121 10:37:42.000357 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:42 crc kubenswrapper[4745]: E0121 10:37:42.000492 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:42 crc kubenswrapper[4745]: E0121 10:37:42.000614 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:42 crc kubenswrapper[4745]: E0121 10:37:42.000672 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.024037 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.024126 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.024139 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.024159 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.024174 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:42Z","lastTransitionTime":"2026-01-21T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.077944 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:42 crc kubenswrapper[4745]: E0121 10:37:42.078169 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:42 crc kubenswrapper[4745]: E0121 10:37:42.078262 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs podName:df21a803-8072-4f8f-8f3a-00267f9c3419 nodeName:}" failed. No retries permitted until 2026-01-21 10:37:50.078235815 +0000 UTC m=+54.539023413 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs") pod "network-metrics-daemon-px52r" (UID: "df21a803-8072-4f8f-8f3a-00267f9c3419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.127558 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.127626 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.127642 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.127669 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.127685 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:42Z","lastTransitionTime":"2026-01-21T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.231400 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.231460 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.231474 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.231496 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.231510 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:42Z","lastTransitionTime":"2026-01-21T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.334613 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.334697 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.334710 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.334736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.334754 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:42Z","lastTransitionTime":"2026-01-21T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.438075 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.438124 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.438136 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.438158 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.438173 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:42Z","lastTransitionTime":"2026-01-21T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.541106 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.541161 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.541175 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.541196 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.541211 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:42Z","lastTransitionTime":"2026-01-21T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.644439 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.644498 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.644514 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.644591 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.644616 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:42Z","lastTransitionTime":"2026-01-21T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.747806 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.747887 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.747920 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.747955 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.747977 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:42Z","lastTransitionTime":"2026-01-21T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.851171 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.851256 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.851282 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.851323 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.851365 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:42Z","lastTransitionTime":"2026-01-21T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.954921 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.954983 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.954996 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.955018 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.955030 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:42Z","lastTransitionTime":"2026-01-21T10:37:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:42 crc kubenswrapper[4745]: I0121 10:37:42.997472 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 03:19:36.489955603 +0000 UTC Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.058664 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.058727 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.058742 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.058769 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.058795 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:43Z","lastTransitionTime":"2026-01-21T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.162033 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.162092 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.162102 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.162122 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.162135 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:43Z","lastTransitionTime":"2026-01-21T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.271105 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.271169 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.271183 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.271210 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.271226 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:43Z","lastTransitionTime":"2026-01-21T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.351655 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.362830 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.367262 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nod
e-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.373989 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.374055 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.374072 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.374101 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.374117 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:43Z","lastTransitionTime":"2026-01-21T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.384512 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.401332 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.419642 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.438017 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.457602 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.475838 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.476825 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.476867 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.476879 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:43 crc 
kubenswrapper[4745]: I0121 10:37:43.476897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.476908 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:43Z","lastTransitionTime":"2026-01-21T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.498671 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f6
7abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.514919 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.534257 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.549733 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc 
kubenswrapper[4745]: I0121 10:37:43.568411 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.579437 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.579480 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.579490 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.579555 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.579570 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:43Z","lastTransitionTime":"2026-01-21T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.587341 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.603674 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.619955 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.643430 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"message\\\":\\\"Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.188\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 10:37:32.602342 6075 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 10:37:32.602138 6075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8f
a4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:43Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.682800 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.682867 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.682878 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.682900 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.682912 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:43Z","lastTransitionTime":"2026-01-21T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.786300 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.786688 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.786781 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.786936 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.787155 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:43Z","lastTransitionTime":"2026-01-21T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.890996 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.891042 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.891053 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.891072 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.891090 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:43Z","lastTransitionTime":"2026-01-21T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.993738 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.993788 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.993803 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.993827 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.993843 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:43Z","lastTransitionTime":"2026-01-21T10:37:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:43 crc kubenswrapper[4745]: I0121 10:37:43.997845 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 17:09:04.985212334 +0000 UTC Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.000716 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.000855 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.000738 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:44 crc kubenswrapper[4745]: E0121 10:37:44.001060 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.000845 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:44 crc kubenswrapper[4745]: E0121 10:37:44.001319 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:44 crc kubenswrapper[4745]: E0121 10:37:44.001369 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:44 crc kubenswrapper[4745]: E0121 10:37:44.001590 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.096412 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.096457 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.096470 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.096490 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.096501 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:44Z","lastTransitionTime":"2026-01-21T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.199121 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.199615 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.199856 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.200080 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.200281 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:44Z","lastTransitionTime":"2026-01-21T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.304225 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.304618 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.304772 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.304876 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.304971 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:44Z","lastTransitionTime":"2026-01-21T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.407857 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.407923 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.407935 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.407957 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.407970 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:44Z","lastTransitionTime":"2026-01-21T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.518420 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.518477 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.518494 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.518520 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.518572 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:44Z","lastTransitionTime":"2026-01-21T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.623691 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.623727 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.623738 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.623765 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.623783 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:44Z","lastTransitionTime":"2026-01-21T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.727172 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.727242 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.727256 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.727281 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.727296 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:44Z","lastTransitionTime":"2026-01-21T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.830715 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.830785 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.830799 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.830841 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.830859 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:44Z","lastTransitionTime":"2026-01-21T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.934319 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.934648 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.934679 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.934698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.934713 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:44Z","lastTransitionTime":"2026-01-21T10:37:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:44 crc kubenswrapper[4745]: I0121 10:37:44.998948 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:01:25.903025123 +0000 UTC Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.038111 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.038150 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.038169 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.038191 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.038204 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:45Z","lastTransitionTime":"2026-01-21T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.143895 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.144362 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.144503 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.144694 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.144831 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:45Z","lastTransitionTime":"2026-01-21T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.249284 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.249345 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.249361 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.249387 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.249405 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:45Z","lastTransitionTime":"2026-01-21T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.352253 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.352299 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.352310 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.352329 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.352341 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:45Z","lastTransitionTime":"2026-01-21T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.455668 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.455729 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.455740 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.455762 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.455792 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:45Z","lastTransitionTime":"2026-01-21T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.559249 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.559720 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.559832 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.559961 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.560050 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:45Z","lastTransitionTime":"2026-01-21T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.663038 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.663078 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.663087 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.663104 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.663115 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:45Z","lastTransitionTime":"2026-01-21T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.766172 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.766224 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.766236 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.766255 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.766268 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:45Z","lastTransitionTime":"2026-01-21T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.869719 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.869767 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.869776 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.869793 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.869803 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:45Z","lastTransitionTime":"2026-01-21T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.973442 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.973495 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.973506 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.973553 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.973565 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:45Z","lastTransitionTime":"2026-01-21T10:37:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.999308 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.999364 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.999326 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 02:18:19.661745748 +0000 UTC Jan 21 10:37:45 crc kubenswrapper[4745]: E0121 10:37:45.999500 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.999552 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:45 crc kubenswrapper[4745]: E0121 10:37:45.999683 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:45 crc kubenswrapper[4745]: I0121 10:37:45.999787 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:45 crc kubenswrapper[4745]: E0121 10:37:45.999861 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:45 crc kubenswrapper[4745]: E0121 10:37:45.999966 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.020494 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.037659 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.051341 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.069215 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.076496 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.076559 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.076570 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.076591 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.076604 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:46Z","lastTransitionTime":"2026-01-21T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.090446 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d
6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.106299 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.124052 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.136304 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.159735 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.173496 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc 
kubenswrapper[4745]: I0121 10:37:46.183071 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.183351 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.183425 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.183488 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.183562 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:46Z","lastTransitionTime":"2026-01-21T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.195059 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.214576 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"message\\\":\\\"Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.188\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 10:37:32.602342 6075 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 10:37:32.602138 6075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8f
a4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.228193 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.241398 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.255824 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.266804 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.279001 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:46Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.286409 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.286446 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.286456 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.286474 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.286486 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:46Z","lastTransitionTime":"2026-01-21T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.389673 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.390211 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.390225 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.390246 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.390258 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:46Z","lastTransitionTime":"2026-01-21T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.493169 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.493590 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.493700 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.493771 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.493830 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:46Z","lastTransitionTime":"2026-01-21T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.597382 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.597434 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.597446 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.597464 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.597476 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:46Z","lastTransitionTime":"2026-01-21T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.699970 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.700469 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.700701 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.700783 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.700873 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:46Z","lastTransitionTime":"2026-01-21T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.805641 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.805707 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.805724 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.805748 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.805765 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:46Z","lastTransitionTime":"2026-01-21T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.908814 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.909229 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.909345 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.909437 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:46 crc kubenswrapper[4745]: I0121 10:37:46.909509 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:46Z","lastTransitionTime":"2026-01-21T10:37:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.000196 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 08:45:41.145562989 +0000 UTC Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.012889 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.012947 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.012965 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.013006 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.013028 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:47Z","lastTransitionTime":"2026-01-21T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.116612 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.117023 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.117105 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.117178 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.117276 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:47Z","lastTransitionTime":"2026-01-21T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.221073 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.221397 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.222013 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.222253 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.222374 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:47Z","lastTransitionTime":"2026-01-21T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.325436 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.325508 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.325522 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.325572 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.325587 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:47Z","lastTransitionTime":"2026-01-21T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.428282 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.428380 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.428758 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.428800 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.428813 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:47Z","lastTransitionTime":"2026-01-21T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.531866 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.531934 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.531946 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.531969 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.531988 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:47Z","lastTransitionTime":"2026-01-21T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.635649 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.635708 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.635720 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.635739 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.635752 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:47Z","lastTransitionTime":"2026-01-21T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.738642 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.738698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.738713 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.738736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.738795 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:47Z","lastTransitionTime":"2026-01-21T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.841438 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.841506 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.841520 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.841568 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.841584 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:47Z","lastTransitionTime":"2026-01-21T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.931855 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.931935 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.931961 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.932003 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:47 crc kubenswrapper[4745]: E0121 10:37:47.932090 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:47 crc 
kubenswrapper[4745]: E0121 10:37:47.932139 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:38:19.932124443 +0000 UTC m=+84.392912051 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:37:47 crc kubenswrapper[4745]: E0121 10:37:47.932324 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:38:19.932314418 +0000 UTC m=+84.393102026 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:37:47 crc kubenswrapper[4745]: E0121 10:37:47.932401 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:47 crc kubenswrapper[4745]: E0121 10:37:47.932415 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:47 crc kubenswrapper[4745]: E0121 10:37:47.932426 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:47 crc kubenswrapper[4745]: E0121 10:37:47.932453 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:38:19.932444921 +0000 UTC m=+84.393232529 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:47 crc kubenswrapper[4745]: E0121 10:37:47.932789 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:47 crc kubenswrapper[4745]: E0121 10:37:47.932925 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:38:19.932898794 +0000 UTC m=+84.393686432 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.943793 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.943862 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.943897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.943931 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:47 crc kubenswrapper[4745]: I0121 10:37:47.943957 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:47Z","lastTransitionTime":"2026-01-21T10:37:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.000035 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:48 crc kubenswrapper[4745]: E0121 10:37:48.000229 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.000055 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.000649 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.000682 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.000568 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 19:21:02.351297259 +0000 UTC Jan 21 10:37:48 crc kubenswrapper[4745]: E0121 10:37:48.000959 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:48 crc kubenswrapper[4745]: E0121 10:37:48.001246 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:48 crc kubenswrapper[4745]: E0121 10:37:48.001405 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.047799 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.048219 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.048356 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.048466 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.048599 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:48Z","lastTransitionTime":"2026-01-21T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.134760 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:48 crc kubenswrapper[4745]: E0121 10:37:48.134964 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:37:48 crc kubenswrapper[4745]: E0121 10:37:48.134987 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:37:48 crc kubenswrapper[4745]: E0121 10:37:48.135026 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:48 crc kubenswrapper[4745]: E0121 10:37:48.135076 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:38:20.135060135 +0000 UTC m=+84.595847743 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.152194 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.152241 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.152252 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.152269 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.152280 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:48Z","lastTransitionTime":"2026-01-21T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.254958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.255003 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.255013 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.255035 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.255045 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:48Z","lastTransitionTime":"2026-01-21T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.358493 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.358577 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.358595 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.358622 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.358640 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:48Z","lastTransitionTime":"2026-01-21T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.461515 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.461919 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.462030 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.462106 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.462281 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:48Z","lastTransitionTime":"2026-01-21T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.566052 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.566133 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.566159 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.566201 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.566261 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:48Z","lastTransitionTime":"2026-01-21T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.669854 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.669932 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.669952 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.669983 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.670000 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:48Z","lastTransitionTime":"2026-01-21T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.772956 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.773292 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.773499 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.773696 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.773868 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:48Z","lastTransitionTime":"2026-01-21T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.876763 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.877961 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.878071 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.878171 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.878254 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:48Z","lastTransitionTime":"2026-01-21T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.981683 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.981727 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.981740 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.981761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:48 crc kubenswrapper[4745]: I0121 10:37:48.981775 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:48Z","lastTransitionTime":"2026-01-21T10:37:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.000958 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 17:52:04.251954018 +0000 UTC Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.084960 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.085014 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.085033 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.085056 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.085073 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:49Z","lastTransitionTime":"2026-01-21T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.187745 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.187787 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.187798 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.187816 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.187829 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:49Z","lastTransitionTime":"2026-01-21T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.290314 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.290721 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.290787 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.290865 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.290984 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:49Z","lastTransitionTime":"2026-01-21T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.393436 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.393809 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.394131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.394373 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.394448 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:49Z","lastTransitionTime":"2026-01-21T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.497283 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.497682 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.497868 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.498074 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.498172 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:49Z","lastTransitionTime":"2026-01-21T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.600115 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.600424 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.600561 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.600736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.600877 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:49Z","lastTransitionTime":"2026-01-21T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.703703 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.703750 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.703763 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.703807 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.703824 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:49Z","lastTransitionTime":"2026-01-21T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.806354 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.806399 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.806409 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.806430 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.806443 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:49Z","lastTransitionTime":"2026-01-21T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.909256 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.910005 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.910083 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.910154 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.910225 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:49Z","lastTransitionTime":"2026-01-21T10:37:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.999282 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:49 crc kubenswrapper[4745]: I0121 10:37:49.999380 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:49 crc kubenswrapper[4745]: E0121 10:37:49.999957 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:50 crc kubenswrapper[4745]: E0121 10:37:50.000029 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:49.999389 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:49.999388 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:50 crc kubenswrapper[4745]: E0121 10:37:50.000160 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:50 crc kubenswrapper[4745]: E0121 10:37:50.000414 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.001616 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:11:34.432086837 +0000 UTC Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.013718 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.013776 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.013790 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.013812 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.013827 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.102876 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.102933 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.102949 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.102967 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.102981 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: E0121 10:37:50.117945 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:50Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.121914 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.121958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.121968 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.121983 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.121994 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: E0121 10:37:50.135007 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:50Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.139114 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.139183 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.139198 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.139217 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.139231 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.153740 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:50 crc kubenswrapper[4745]: E0121 10:37:50.154330 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:50 crc kubenswrapper[4745]: E0121 10:37:50.154521 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs podName:df21a803-8072-4f8f-8f3a-00267f9c3419 nodeName:}" failed. No retries permitted until 2026-01-21 10:38:06.154495578 +0000 UTC m=+70.615283176 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs") pod "network-metrics-daemon-px52r" (UID: "df21a803-8072-4f8f-8f3a-00267f9c3419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:37:50 crc kubenswrapper[4745]: E0121 10:37:50.155033 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb4
9c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\"
:[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d4
6c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\
\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-0
4bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:50Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.159663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.159706 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.159718 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.159736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.159746 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: E0121 10:37:50.172958 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:50Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.176955 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.177107 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.177199 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.177297 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.177434 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: E0121 10:37:50.192917 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:50Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:50Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:50 crc kubenswrapper[4745]: E0121 10:37:50.193367 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.195546 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.195785 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.195797 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.195816 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.195831 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.298496 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.298555 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.298566 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.298582 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.298592 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.402049 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.402111 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.402130 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.402152 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.402164 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.505835 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.506278 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.506381 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.506491 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.506585 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.609234 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.609297 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.609321 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.609352 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.609378 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.712864 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.713310 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.713380 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.713478 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.713590 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.816017 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.816071 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.816083 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.816102 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.816116 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.920014 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.920699 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.920738 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.920768 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:50 crc kubenswrapper[4745]: I0121 10:37:50.920787 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:50Z","lastTransitionTime":"2026-01-21T10:37:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.000206 4745 scope.go:117] "RemoveContainer" containerID="a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.001809 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:02:34.150841197 +0000 UTC Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.023538 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.023569 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.023580 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.023595 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.023604 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:51Z","lastTransitionTime":"2026-01-21T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.126466 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.126893 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.126969 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.127035 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.127093 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:51Z","lastTransitionTime":"2026-01-21T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.230751 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.230806 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.231058 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.231086 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.231097 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:51Z","lastTransitionTime":"2026-01-21T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.334018 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.334061 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.334070 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.334090 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.334102 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:51Z","lastTransitionTime":"2026-01-21T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.445833 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.445876 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.445887 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.445907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.445921 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:51Z","lastTransitionTime":"2026-01-21T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.494838 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/1.log" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.507654 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.508786 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.527813 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb
217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\
\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.549576 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.549608 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.549616 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.549632 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.549645 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:51Z","lastTransitionTime":"2026-01-21T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.553390 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94
bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.572550 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.593581 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.612295 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.627997 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-re
lease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.645204 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.652574 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.652623 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.652634 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.653177 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.653222 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:51Z","lastTransitionTime":"2026-01-21T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.663144 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.679574 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.697028 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-
01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.710287 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc 
kubenswrapper[4745]: I0121 10:37:51.725244 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.740637 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.757177 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.757241 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:51 crc 
kubenswrapper[4745]: I0121 10:37:51.757254 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.757273 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.757285 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:51Z","lastTransitionTime":"2026-01-21T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.759554 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.772950 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.787001 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.808496 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metric
s-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMount
s\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-contro
ller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"message\\\":\\\"Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.188\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 10:37:32.602342 6075 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 10:37:32.602138 6075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:51Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.863053 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.863113 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.863126 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.863144 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.863157 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:51Z","lastTransitionTime":"2026-01-21T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.966268 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.966318 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.966329 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.966349 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.966361 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:51Z","lastTransitionTime":"2026-01-21T10:37:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:51 crc kubenswrapper[4745]: I0121 10:37:51.999901 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.000001 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:52 crc kubenswrapper[4745]: E0121 10:37:52.000087 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:51.999901 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:52 crc kubenswrapper[4745]: E0121 10:37:52.000159 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:52 crc kubenswrapper[4745]: E0121 10:37:52.000222 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.000655 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:52 crc kubenswrapper[4745]: E0121 10:37:52.000898 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.002643 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:43:14.594934878 +0000 UTC Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.070120 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.070167 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.070178 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.070203 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.070217 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:52Z","lastTransitionTime":"2026-01-21T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.173191 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.173257 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.173267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.173284 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.173297 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:52Z","lastTransitionTime":"2026-01-21T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.275955 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.276021 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.276032 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.276049 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.276061 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:52Z","lastTransitionTime":"2026-01-21T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.378801 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.378846 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.378857 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.378877 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.378892 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:52Z","lastTransitionTime":"2026-01-21T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.481874 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.481943 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.481958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.481981 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.481997 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:52Z","lastTransitionTime":"2026-01-21T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.512126 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/2.log" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.512675 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/1.log" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.515301 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea" exitCode=1 Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.515377 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea"} Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.515449 4745 scope.go:117] "RemoveContainer" containerID="a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.516162 4745 scope.go:117] "RemoveContainer" containerID="861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea" Jan 21 10:37:52 crc kubenswrapper[4745]: E0121 10:37:52.516359 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.535727 4745 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\
\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01
-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01
451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for 
pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.550085 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d071
23b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.562339 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.575559 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.584928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.584962 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.584972 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:52 crc 
kubenswrapper[4745]: I0121 10:37:52.584990 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.585001 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:52Z","lastTransitionTime":"2026-01-21T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.591966 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 
10:37:52.611151 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.626181 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b2
6\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.638409 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc 
kubenswrapper[4745]: I0121 10:37:52.652746 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.668483 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.688446 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.688559 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:52 crc 
kubenswrapper[4745]: I0121 10:37:52.688576 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.688600 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.688633 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:52Z","lastTransitionTime":"2026-01-21T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.699214 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.742904 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3c5d56bfba394f36a4b882dcb657019089a8776dee3c0ca1f7fd25140f3b674\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"message\\\":\\\"Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-etcd-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", 
Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-etcd-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.188\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0121 10:37:32.602342 6075 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0121 10:37:32.602138 6075 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:52Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076852 6300 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076879 6300 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.076903 6300 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.078680 6300 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:37:52.078708 6300 factory.go:656] Stopping watch factory\\\\nI0121 10:37:52.078732 6300 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:52.123109 6300 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 10:37:52.123140 6300 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 10:37:52.123255 6300 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:37:52.123294 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 10:37:52.123498 6300 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.765792 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.785553 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.791952 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.792004 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.792017 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.792061 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.792074 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:52Z","lastTransitionTime":"2026-01-21T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.800355 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.816657 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\
\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.832763 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:52Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.895554 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.895595 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.895605 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.895627 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:52 crc kubenswrapper[4745]: I0121 10:37:52.895637 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:52Z","lastTransitionTime":"2026-01-21T10:37:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.000518 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.000667 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.000697 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.000731 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.000755 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:53Z","lastTransitionTime":"2026-01-21T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.003590 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 19:01:52.575310999 +0000 UTC Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.103564 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.103764 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.103828 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.103897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.103956 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:53Z","lastTransitionTime":"2026-01-21T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.207443 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.207815 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.207883 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.207995 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.208062 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:53Z","lastTransitionTime":"2026-01-21T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.311231 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.311265 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.311274 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.311291 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.311303 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:53Z","lastTransitionTime":"2026-01-21T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.414578 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.414950 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.415088 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.415178 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.415291 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:53Z","lastTransitionTime":"2026-01-21T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.520046 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.520105 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.520120 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.520143 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.520156 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:53Z","lastTransitionTime":"2026-01-21T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.525266 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/2.log" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.530196 4745 scope.go:117] "RemoveContainer" containerID="861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea" Jan 21 10:37:53 crc kubenswrapper[4745]: E0121 10:37:53.530459 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.546776 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.571301 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:52Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076852 6300 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076879 6300 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.076903 6300 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.078680 6300 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:37:52.078708 6300 factory.go:656] Stopping watch factory\\\\nI0121 10:37:52.078732 6300 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:52.123109 6300 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 10:37:52.123140 6300 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 10:37:52.123255 6300 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:37:52.123294 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 10:37:52.123498 6300 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8f
a4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.585844 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.600281 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.613234 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82
91b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.623337 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.623370 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.623383 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.623401 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.623411 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:53Z","lastTransitionTime":"2026-01-21T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.630239 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T
10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.643693 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.662167 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.678484 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.694026 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.710350 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.725432 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.726570 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.726634 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.726651 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.726676 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.726691 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:53Z","lastTransitionTime":"2026-01-21T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.742078 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.757332 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc 
kubenswrapper[4745]: I0121 10:37:53.773525 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.789552 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.807507 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:53Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.829565 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.829612 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.829620 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.829638 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.829648 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:53Z","lastTransitionTime":"2026-01-21T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.932901 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.932951 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.932963 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.932980 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.932991 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:53Z","lastTransitionTime":"2026-01-21T10:37:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:53 crc kubenswrapper[4745]: I0121 10:37:53.999801 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:54 crc kubenswrapper[4745]: E0121 10:37:53.999986 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.000196 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.000201 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:54 crc kubenswrapper[4745]: E0121 10:37:54.000238 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:54 crc kubenswrapper[4745]: E0121 10:37:54.000408 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.000448 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:54 crc kubenswrapper[4745]: E0121 10:37:54.000511 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.003747 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 20:49:38.515731435 +0000 UTC Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.035788 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.035871 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.035885 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.035910 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.035926 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:54Z","lastTransitionTime":"2026-01-21T10:37:54Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.139347 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.139400 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.139410 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.139430 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.139717 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:54Z","lastTransitionTime":"2026-01-21T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.243179 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.243250 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.243259 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.243283 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.243296 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:54Z","lastTransitionTime":"2026-01-21T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.346239 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.346305 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.346316 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.346336 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.346348 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:54Z","lastTransitionTime":"2026-01-21T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.449263 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.449307 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.449319 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.449340 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.449353 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:54Z","lastTransitionTime":"2026-01-21T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.552397 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.552455 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.552474 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.552502 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.552519 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:54Z","lastTransitionTime":"2026-01-21T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.655264 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.655305 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.655314 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.655333 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.655343 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:54Z","lastTransitionTime":"2026-01-21T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.759039 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.759099 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.759111 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.759137 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.759191 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:54Z","lastTransitionTime":"2026-01-21T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.862372 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.862419 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.862431 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.862453 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.862468 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:54Z","lastTransitionTime":"2026-01-21T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.965071 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.965126 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.965138 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.965156 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:54 crc kubenswrapper[4745]: I0121 10:37:54.965168 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:54Z","lastTransitionTime":"2026-01-21T10:37:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.003909 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:44:44.217083449 +0000 UTC Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.069279 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.069341 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.069356 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.069379 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.069395 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:55Z","lastTransitionTime":"2026-01-21T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.172598 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.172654 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.172664 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.172685 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.172702 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:55Z","lastTransitionTime":"2026-01-21T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.276176 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.276248 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.276264 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.276315 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.276330 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:55Z","lastTransitionTime":"2026-01-21T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.379123 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.379197 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.379211 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.379235 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.379249 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:55Z","lastTransitionTime":"2026-01-21T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.482339 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.482393 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.482408 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.482430 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.482442 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:55Z","lastTransitionTime":"2026-01-21T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.585195 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.585258 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.585270 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.585288 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.585300 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:55Z","lastTransitionTime":"2026-01-21T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.688040 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.688092 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.688103 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.688124 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.688144 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:55Z","lastTransitionTime":"2026-01-21T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.791365 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.791435 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.791448 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.791471 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.791489 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:55Z","lastTransitionTime":"2026-01-21T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.895151 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.895220 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.895242 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.895273 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:55 crc kubenswrapper[4745]: I0121 10:37:55.895295 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:55Z","lastTransitionTime":"2026-01-21T10:37:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.000069 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.000134 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:56 crc kubenswrapper[4745]: E0121 10:37:56.000234 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.001668 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.006621 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:56 crc kubenswrapper[4745]: E0121 10:37:56.007057 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:56 crc kubenswrapper[4745]: E0121 10:37:56.007256 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:56 crc kubenswrapper[4745]: E0121 10:37:56.007354 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.007522 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 20:09:44.145830063 +0000 UTC Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.007606 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.007673 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.007689 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.007712 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.007728 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:56Z","lastTransitionTime":"2026-01-21T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.021246 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.046920 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:52Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076852 6300 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076879 6300 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.076903 6300 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.078680 6300 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:37:52.078708 6300 factory.go:656] Stopping watch factory\\\\nI0121 10:37:52.078732 6300 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:52.123109 6300 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 10:37:52.123140 6300 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 10:37:52.123255 6300 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:37:52.123294 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 10:37:52.123498 6300 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8f
a4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.066227 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.081408 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.095918 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.110213 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.110249 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.110261 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.110296 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b8
5f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.110336 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.110352 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:56Z","lastTransitionTime":"2026-01-21T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.127183 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.142123 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.158856 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.174457 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.188998 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.202832 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.212631 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.212681 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.212693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.212715 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.212746 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:56Z","lastTransitionTime":"2026-01-21T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.217717 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.229920 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.245838 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.259463 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc 
kubenswrapper[4745]: I0121 10:37:56.273924 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:37:56Z is after 2025-08-24T17:21:41Z" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.316128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.316196 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.316213 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.316241 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.316257 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:56Z","lastTransitionTime":"2026-01-21T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.419231 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.419657 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.419804 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.419928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.420009 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:56Z","lastTransitionTime":"2026-01-21T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.523064 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.523101 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.523109 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.523125 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.523136 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:56Z","lastTransitionTime":"2026-01-21T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.626982 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.627032 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.627041 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.627062 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.627074 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:56Z","lastTransitionTime":"2026-01-21T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.730489 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.730557 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.730570 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.730592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.730608 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:56Z","lastTransitionTime":"2026-01-21T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.834695 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.835223 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.835307 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.835392 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.835458 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:56Z","lastTransitionTime":"2026-01-21T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.938352 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.938406 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.938417 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.938439 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:56 crc kubenswrapper[4745]: I0121 10:37:56.938452 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:56Z","lastTransitionTime":"2026-01-21T10:37:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.008576 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 08:26:21.094278085 +0000 UTC Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.041280 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.041362 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.041378 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.041397 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.041410 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:57Z","lastTransitionTime":"2026-01-21T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.144427 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.144834 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.144948 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.145064 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.145155 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:57Z","lastTransitionTime":"2026-01-21T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.250984 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.251029 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.251042 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.251064 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.251079 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:57Z","lastTransitionTime":"2026-01-21T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.353759 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.354154 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.354250 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.354346 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.354436 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:57Z","lastTransitionTime":"2026-01-21T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.457730 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.457779 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.457791 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.457811 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.457823 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:57Z","lastTransitionTime":"2026-01-21T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.560581 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.560628 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.560638 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.560655 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.560668 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:57Z","lastTransitionTime":"2026-01-21T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.664189 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.664707 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.664804 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.664897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.664982 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:57Z","lastTransitionTime":"2026-01-21T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.767799 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.768144 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.768228 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.768338 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.768435 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:57Z","lastTransitionTime":"2026-01-21T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.871693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.872140 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.872244 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.872349 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.872445 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:57Z","lastTransitionTime":"2026-01-21T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.976750 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.976810 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.976821 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.976844 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:57 crc kubenswrapper[4745]: I0121 10:37:57.976856 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:57Z","lastTransitionTime":"2026-01-21T10:37:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.000121 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.000118 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.000214 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.000301 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:37:58 crc kubenswrapper[4745]: E0121 10:37:58.000416 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:37:58 crc kubenswrapper[4745]: E0121 10:37:58.000587 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:37:58 crc kubenswrapper[4745]: E0121 10:37:58.000678 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:37:58 crc kubenswrapper[4745]: E0121 10:37:58.000844 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.009630 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 02:20:43.010182012 +0000 UTC Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.079312 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.079352 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.079366 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.079386 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.079398 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:58Z","lastTransitionTime":"2026-01-21T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.183061 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.183115 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.183128 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.183148 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.183160 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:58Z","lastTransitionTime":"2026-01-21T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.286579 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.286631 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.286640 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.286659 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.286671 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:58Z","lastTransitionTime":"2026-01-21T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.390584 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.390645 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.390658 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.390681 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.390724 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:58Z","lastTransitionTime":"2026-01-21T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.494036 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.494442 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.494560 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.494681 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.494831 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:58Z","lastTransitionTime":"2026-01-21T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.598515 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.598586 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.598599 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.598620 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.598633 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:58Z","lastTransitionTime":"2026-01-21T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.702115 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.702598 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.702715 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.702814 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.702920 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:58Z","lastTransitionTime":"2026-01-21T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.806435 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.806905 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.807146 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.807289 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.807408 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:58Z","lastTransitionTime":"2026-01-21T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.910145 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.910583 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.910705 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.910830 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:58 crc kubenswrapper[4745]: I0121 10:37:58.910931 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:58Z","lastTransitionTime":"2026-01-21T10:37:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.010744 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 05:57:45.917180166 +0000 UTC Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.014306 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.014351 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.014369 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.014393 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.014409 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:59Z","lastTransitionTime":"2026-01-21T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.117834 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.118266 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.118356 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.118450 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.118595 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:59Z","lastTransitionTime":"2026-01-21T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.221739 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.222323 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.222396 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.222469 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.222545 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:59Z","lastTransitionTime":"2026-01-21T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.325902 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.325948 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.325959 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.325977 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.325989 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:59Z","lastTransitionTime":"2026-01-21T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.428844 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.428888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.428899 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.428919 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.428931 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:59Z","lastTransitionTime":"2026-01-21T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.531921 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.531975 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.531988 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.532010 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.532025 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:59Z","lastTransitionTime":"2026-01-21T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.635394 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.635922 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.635935 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.635953 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.635963 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:59Z","lastTransitionTime":"2026-01-21T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.739088 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.739456 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.739596 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.739731 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.739839 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:59Z","lastTransitionTime":"2026-01-21T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.843119 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.843170 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.843181 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.843204 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.843215 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:59Z","lastTransitionTime":"2026-01-21T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.946664 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.946706 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.946736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.946761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:37:59 crc kubenswrapper[4745]: I0121 10:37:59.946771 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:37:59Z","lastTransitionTime":"2026-01-21T10:37:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.000129 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.000152 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.000198 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:00 crc kubenswrapper[4745]: E0121 10:38:00.001134 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.000211 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:00 crc kubenswrapper[4745]: E0121 10:38:00.000884 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:00 crc kubenswrapper[4745]: E0121 10:38:00.001262 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:00 crc kubenswrapper[4745]: E0121 10:38:00.001408 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.011338 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:40:05.458417508 +0000 UTC Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.049843 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.049886 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.049894 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.049913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.049924 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.153734 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.153851 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.153864 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.154152 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.154173 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.258120 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.258195 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.258215 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.258256 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.258274 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.360960 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.361008 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.361017 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.361035 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.361046 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.452537 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.452588 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.452603 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.452622 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.452634 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:00 crc kubenswrapper[4745]: E0121 10:38:00.466130 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:00Z is after 2025-08-24T17:21:41Z"
Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.470260 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.470298 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.470309 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.470329 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.470340 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.527404 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.527490 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.527501 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.527546 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.527558 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:00 crc kubenswrapper[4745]: E0121 10:38:00.540981 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:00Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:00 crc kubenswrapper[4745]: E0121 10:38:00.541133 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.543302 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.543353 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.543366 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.543387 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.543441 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.646595 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.646637 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.646648 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.646667 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.646679 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.749584 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.749696 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.749709 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.749733 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.749745 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.852804 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.852855 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.852865 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.852889 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.852902 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.955928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.956023 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.956048 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.956072 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:00 crc kubenswrapper[4745]: I0121 10:38:00.956087 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:00Z","lastTransitionTime":"2026-01-21T10:38:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.012523 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 22:52:49.001974445 +0000 UTC Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.060041 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.060100 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.060110 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.060132 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.060144 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:01Z","lastTransitionTime":"2026-01-21T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.163587 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.164019 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.164104 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.164204 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.164306 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:01Z","lastTransitionTime":"2026-01-21T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.267410 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.267889 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.267989 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.268063 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.268196 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:01Z","lastTransitionTime":"2026-01-21T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.371718 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.372359 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.372436 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.372524 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.372658 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:01Z","lastTransitionTime":"2026-01-21T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.475264 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.475311 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.475320 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.475339 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.475350 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:01Z","lastTransitionTime":"2026-01-21T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.578513 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.578938 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.579006 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.579081 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.579148 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:01Z","lastTransitionTime":"2026-01-21T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.686050 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.686101 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.686115 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.686139 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.686155 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:01Z","lastTransitionTime":"2026-01-21T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.788735 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.788769 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.788785 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.788808 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.788823 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:01Z","lastTransitionTime":"2026-01-21T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.890891 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.890936 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.890946 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.890963 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.890973 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:01Z","lastTransitionTime":"2026-01-21T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.993861 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.993916 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.993930 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.993958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.993977 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:01Z","lastTransitionTime":"2026-01-21T10:38:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.999692 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.999764 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.999712 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:01 crc kubenswrapper[4745]: E0121 10:38:01.999857 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:01 crc kubenswrapper[4745]: I0121 10:38:01.999720 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:02 crc kubenswrapper[4745]: E0121 10:38:02.000043 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:02 crc kubenswrapper[4745]: E0121 10:38:02.000191 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:02 crc kubenswrapper[4745]: E0121 10:38:02.000243 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.013456 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:52:08.56228592 +0000 UTC Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.097518 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.097596 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.097611 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.097635 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.097650 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:02Z","lastTransitionTime":"2026-01-21T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.200248 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.200693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.200772 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.200875 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.200967 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:02Z","lastTransitionTime":"2026-01-21T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.304060 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.304104 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.304116 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.304136 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.304149 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:02Z","lastTransitionTime":"2026-01-21T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.408623 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.408689 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.408701 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.408720 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.408732 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:02Z","lastTransitionTime":"2026-01-21T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.511492 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.511556 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.511570 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.511591 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.511620 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:02Z","lastTransitionTime":"2026-01-21T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.614327 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.614411 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.614433 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.614461 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.614475 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:02Z","lastTransitionTime":"2026-01-21T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.716920 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.716969 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.716981 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.717001 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.717015 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:02Z","lastTransitionTime":"2026-01-21T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.819780 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.819849 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.819864 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.819888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.819903 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:02Z","lastTransitionTime":"2026-01-21T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.922513 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.922924 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.923067 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.923171 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:02 crc kubenswrapper[4745]: I0121 10:38:02.923255 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:02Z","lastTransitionTime":"2026-01-21T10:38:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.013298 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.013574 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:07:53.131775114 +0000 UTC Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.026595 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.026645 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.026657 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.026681 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.026697 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:03Z","lastTransitionTime":"2026-01-21T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.130428 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.130493 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.130504 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.130556 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.130571 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:03Z","lastTransitionTime":"2026-01-21T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.233038 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.233416 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.233552 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.233661 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.233784 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:03Z","lastTransitionTime":"2026-01-21T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.337244 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.337329 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.337345 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.337367 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.337407 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:03Z","lastTransitionTime":"2026-01-21T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.439944 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.439994 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.440007 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.440032 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.440048 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:03Z","lastTransitionTime":"2026-01-21T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.543725 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.543780 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.543793 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.543816 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.543832 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:03Z","lastTransitionTime":"2026-01-21T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.652644 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.652697 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.652707 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.652725 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.652736 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:03Z","lastTransitionTime":"2026-01-21T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.755866 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.755925 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.755937 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.755959 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.755975 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:03Z","lastTransitionTime":"2026-01-21T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.859149 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.859200 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.859213 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.859234 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.859248 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:03Z","lastTransitionTime":"2026-01-21T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.962448 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.962520 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.962564 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.962587 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:03 crc kubenswrapper[4745]: I0121 10:38:03.962603 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:03Z","lastTransitionTime":"2026-01-21T10:38:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:03.999922 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:03.999974 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:03.999930 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.000156 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:04 crc kubenswrapper[4745]: E0121 10:38:04.000148 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:04 crc kubenswrapper[4745]: E0121 10:38:04.000265 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:04 crc kubenswrapper[4745]: E0121 10:38:04.000339 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:04 crc kubenswrapper[4745]: E0121 10:38:04.000389 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.014011 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:58:40.277879171 +0000 UTC Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.065852 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.065893 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.065907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.065927 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.065942 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:04Z","lastTransitionTime":"2026-01-21T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.168509 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.168583 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.168594 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.168611 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.168624 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:04Z","lastTransitionTime":"2026-01-21T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.271384 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.271429 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.271439 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.271458 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.271470 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:04Z","lastTransitionTime":"2026-01-21T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.374610 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.374660 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.374670 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.374688 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.374701 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:04Z","lastTransitionTime":"2026-01-21T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.479667 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.479739 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.479752 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.479778 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.479795 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:04Z","lastTransitionTime":"2026-01-21T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.582696 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.582742 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.582752 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.582776 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.582793 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:04Z","lastTransitionTime":"2026-01-21T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.685546 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.685606 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.685617 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.685637 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.685649 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:04Z","lastTransitionTime":"2026-01-21T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.788260 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.788325 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.788338 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.788362 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.788378 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:04Z","lastTransitionTime":"2026-01-21T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.891202 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.891248 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.891257 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.891278 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.891289 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:04Z","lastTransitionTime":"2026-01-21T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.993782 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.993841 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.993855 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.993877 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:04 crc kubenswrapper[4745]: I0121 10:38:04.993893 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:04Z","lastTransitionTime":"2026-01-21T10:38:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.014444 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:18:25.726806133 +0000 UTC Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.097595 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.097651 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.097665 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.097690 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.097704 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:05Z","lastTransitionTime":"2026-01-21T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.201006 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.201065 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.201076 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.201100 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.201117 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:05Z","lastTransitionTime":"2026-01-21T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.304752 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.304825 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.304838 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.304863 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.304878 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:05Z","lastTransitionTime":"2026-01-21T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.407832 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.407900 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.407924 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.407950 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.407968 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:05Z","lastTransitionTime":"2026-01-21T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.511065 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.511106 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.511115 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.511132 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.511141 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:05Z","lastTransitionTime":"2026-01-21T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.613973 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.614048 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.614058 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.614077 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.614087 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:05Z","lastTransitionTime":"2026-01-21T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.716610 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.716664 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.716686 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.716708 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.716723 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:05Z","lastTransitionTime":"2026-01-21T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.819485 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.819550 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.819563 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.819582 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.819598 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:05Z","lastTransitionTime":"2026-01-21T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.921913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.921956 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.921967 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.921987 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:05 crc kubenswrapper[4745]: I0121 10:38:05.922001 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:05Z","lastTransitionTime":"2026-01-21T10:38:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.000762 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:06 crc kubenswrapper[4745]: E0121 10:38:06.000996 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.001699 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:06 crc kubenswrapper[4745]: E0121 10:38:06.001779 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.001979 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:06 crc kubenswrapper[4745]: E0121 10:38:06.002101 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.002599 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:06 crc kubenswrapper[4745]: E0121 10:38:06.002699 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.015324 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 14:43:43.514401876 +0000 UTC Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.022253 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.024556 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.024601 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.024611 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.024631 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.024644 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:06Z","lastTransitionTime":"2026-01-21T10:38:06Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.040581 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.056657 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.078915 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.097021 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.112977 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.128097 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:06 crc 
kubenswrapper[4745]: I0121 10:38:06.128138 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.128149 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.128166 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.128177 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:06Z","lastTransitionTime":"2026-01-21T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.128203 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.143546 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io
/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.162829 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.180694 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.198282 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.220922 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:52Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076852 6300 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076879 6300 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.076903 6300 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.078680 6300 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:37:52.078708 6300 factory.go:656] Stopping watch factory\\\\nI0121 10:37:52.078732 6300 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:52.123109 6300 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 10:37:52.123140 6300 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 10:37:52.123255 6300 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:37:52.123294 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 10:37:52.123498 6300 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8f
a4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.231711 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.231772 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.231783 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.231805 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.231819 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:06Z","lastTransitionTime":"2026-01-21T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.236902 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.243642 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:06 crc kubenswrapper[4745]: E0121 10:38:06.243835 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:38:06 crc kubenswrapper[4745]: E0121 10:38:06.243909 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs podName:df21a803-8072-4f8f-8f3a-00267f9c3419 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:38:38.243886502 +0000 UTC m=+102.704674100 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs") pod "network-metrics-daemon-px52r" (UID: "df21a803-8072-4f8f-8f3a-00267f9c3419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.251031 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.264730 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"
},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.279449 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa4c591-892a-4bf2-ad34-e9ed22b30fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab4c6b8f018a3b9a6cf312b8b3a2d14644736b45232de4dcd26408665ed9da1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.293954 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.306179 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:06Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.334475 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.334520 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.334570 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.334591 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.334601 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:06Z","lastTransitionTime":"2026-01-21T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.437912 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.437981 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.437993 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.438018 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.438030 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:06Z","lastTransitionTime":"2026-01-21T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.541007 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.541405 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.541470 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.541555 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.541628 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:06Z","lastTransitionTime":"2026-01-21T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.644592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.645152 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.645405 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.645661 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.645827 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:06Z","lastTransitionTime":"2026-01-21T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.749243 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.749299 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.749309 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.749333 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.749346 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:06Z","lastTransitionTime":"2026-01-21T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.852071 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.852107 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.852118 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.852133 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.852146 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:06Z","lastTransitionTime":"2026-01-21T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.954968 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.955046 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.955060 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.955083 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:06 crc kubenswrapper[4745]: I0121 10:38:06.955096 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:06Z","lastTransitionTime":"2026-01-21T10:38:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.000883 4745 scope.go:117] "RemoveContainer" containerID="861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea" Jan 21 10:38:07 crc kubenswrapper[4745]: E0121 10:38:07.001112 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.016298 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:00:05.647182703 +0000 UTC Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.057934 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.057970 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.057982 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.058003 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.058015 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:07Z","lastTransitionTime":"2026-01-21T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.160144 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.160175 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.160184 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.160219 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.160230 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:07Z","lastTransitionTime":"2026-01-21T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.262954 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.263013 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.263025 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.263049 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.263064 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:07Z","lastTransitionTime":"2026-01-21T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.365691 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.365749 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.365760 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.365779 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.365805 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:07Z","lastTransitionTime":"2026-01-21T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.468267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.468315 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.468326 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.468352 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.468364 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:07Z","lastTransitionTime":"2026-01-21T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.571418 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.571463 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.571474 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.571492 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.571503 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:07Z","lastTransitionTime":"2026-01-21T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.678728 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.678779 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.678790 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.678808 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.678820 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:07Z","lastTransitionTime":"2026-01-21T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.781705 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.781771 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.781784 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.781808 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.781823 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:07Z","lastTransitionTime":"2026-01-21T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.885131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.885202 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.885216 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.885242 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.885259 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:07Z","lastTransitionTime":"2026-01-21T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.988601 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.988669 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.988686 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.988714 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.988730 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:07Z","lastTransitionTime":"2026-01-21T10:38:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:07 crc kubenswrapper[4745]: I0121 10:38:07.999872 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:07.999995 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.000109 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:08 crc kubenswrapper[4745]: E0121 10:38:08.000021 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.000199 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:08 crc kubenswrapper[4745]: E0121 10:38:08.000258 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:08 crc kubenswrapper[4745]: E0121 10:38:08.000351 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:08 crc kubenswrapper[4745]: E0121 10:38:08.000449 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.017164 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:47:17.34765116 +0000 UTC Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.091689 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.091733 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.091743 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.091760 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.091772 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:08Z","lastTransitionTime":"2026-01-21T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.194920 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.194958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.194972 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.194992 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.195006 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:08Z","lastTransitionTime":"2026-01-21T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.297754 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.298171 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.298707 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.299078 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.299465 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:08Z","lastTransitionTime":"2026-01-21T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.402887 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.403268 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.403457 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.403546 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.403621 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:08Z","lastTransitionTime":"2026-01-21T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.506706 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.506749 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.506758 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.506776 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.506826 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:08Z","lastTransitionTime":"2026-01-21T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.610077 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.610135 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.610149 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.610169 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.610184 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:08Z","lastTransitionTime":"2026-01-21T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.712986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.713038 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.713048 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.713066 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.713077 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:08Z","lastTransitionTime":"2026-01-21T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.815861 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.815909 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.815926 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.815948 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.816062 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:08Z","lastTransitionTime":"2026-01-21T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.919452 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.919953 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.920026 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.920107 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:08 crc kubenswrapper[4745]: I0121 10:38:08.920196 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:08Z","lastTransitionTime":"2026-01-21T10:38:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.018025 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:50:37.793139807 +0000 UTC Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.022551 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.022618 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.022643 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.022669 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.022687 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:09Z","lastTransitionTime":"2026-01-21T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.125067 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.125110 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.125126 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.125147 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.125163 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:09Z","lastTransitionTime":"2026-01-21T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.228132 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.228179 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.228196 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.228217 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.228230 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:09Z","lastTransitionTime":"2026-01-21T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.331703 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.331767 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.331779 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.331800 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.331811 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:09Z","lastTransitionTime":"2026-01-21T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.435208 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.435539 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.435648 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.435735 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.435803 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:09Z","lastTransitionTime":"2026-01-21T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.538395 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.538853 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.539606 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.539859 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.540095 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:09Z","lastTransitionTime":"2026-01-21T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.642792 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.643201 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.643395 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.643479 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.643560 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:09Z","lastTransitionTime":"2026-01-21T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.747273 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.747327 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.747339 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.747363 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.747374 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:09Z","lastTransitionTime":"2026-01-21T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.850875 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.850922 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.850933 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.850958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.850978 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:09Z","lastTransitionTime":"2026-01-21T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.954195 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.954705 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.954846 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.954981 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.955094 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:09Z","lastTransitionTime":"2026-01-21T10:38:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.999900 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:09 crc kubenswrapper[4745]: I0121 10:38:09.999947 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:10 crc kubenswrapper[4745]: E0121 10:38:10.000142 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.000438 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:10 crc kubenswrapper[4745]: E0121 10:38:10.000548 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.000722 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:10 crc kubenswrapper[4745]: E0121 10:38:10.000847 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:10 crc kubenswrapper[4745]: E0121 10:38:10.001006 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.019514 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 18:17:47.740962125 +0000 UTC Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.057918 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.058245 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.058309 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.058373 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.058463 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.161792 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.162203 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.162273 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.162341 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.162405 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.265559 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.266370 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.266457 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.266552 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.266636 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.370028 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.370404 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.370465 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.370540 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.370640 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.474141 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.474508 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.474644 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.474731 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.474811 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.578184 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.578253 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.578265 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.578286 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.578297 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.585154 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/0.log" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.585312 4745 generic.go:334] "Generic (PLEG): container finished" podID="25458900-3da2-4c9d-8463-9acde2add0a6" containerID="099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af" exitCode=1 Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.585405 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p8q45" event={"ID":"25458900-3da2-4c9d-8463-9acde2add0a6","Type":"ContainerDied","Data":"099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.585916 4745 scope.go:117] "RemoveContainer" containerID="099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.615597 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.642791 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.658502 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.678891 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.684331 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.684367 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.684377 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.684397 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.684408 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.693132 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.709883 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:38:09Z\\\",\\\"message\\\":\\\"2026-01-21T10:37:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400\\\\n2026-01-21T10:37:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400 to /host/opt/cni/bin/\\\\n2026-01-21T10:37:24Z [verbose] multus-daemon started\\\\n2026-01-21T10:37:24Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:38:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.724313 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc 
kubenswrapper[4745]: I0121 10:38:10.730197 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.730235 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.730247 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.730266 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.730278 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.744439 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d
6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: E0121 10:38:10.744770 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.750133 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.750164 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.750176 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.750193 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.750205 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.759199 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: E0121 10:38:10.762477 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has 
no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c6
9fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256
:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\
\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737
e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909
bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.766977 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.767023 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.767036 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.767054 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.767065 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.775397 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: E0121 10:38:10.779299 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.783256 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.783288 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.783299 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.783321 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.783336 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.791758 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: E0121 10:38:10.799374 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redh
at/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99
d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815
\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\"
:448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.803322 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.803358 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.803368 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.803387 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.803399 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.822681 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:52Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076852 6300 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076879 6300 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.076903 6300 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.078680 6300 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:37:52.078708 6300 factory.go:656] Stopping watch factory\\\\nI0121 10:37:52.078732 6300 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:52.123109 6300 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 10:37:52.123140 6300 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 10:37:52.123255 6300 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:37:52.123294 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 10:37:52.123498 6300 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8f
a4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: E0121 10:38:10.827562 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: E0121 10:38:10.827955 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.829986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.830127 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.830187 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.830263 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.830321 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.836822 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.848264 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217
bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.859768 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6d
e2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.873070 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa4c591-892a-4bf2-ad34-e9ed22b30fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab4c6b8f018a3b9a6cf312b8b3a2d14644736b45232de4dcd26408665ed9da1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.886493 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.900931 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:10Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.932596 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.932963 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.933176 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.933379 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:10 crc kubenswrapper[4745]: I0121 10:38:10.933578 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:10Z","lastTransitionTime":"2026-01-21T10:38:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.020879 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:04:59.746864051 +0000 UTC Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.035986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.036026 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.036035 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.036051 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.036061 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:11Z","lastTransitionTime":"2026-01-21T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.139792 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.140249 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.140375 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.140458 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.140544 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:11Z","lastTransitionTime":"2026-01-21T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.243896 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.244253 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.244377 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.244452 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.244512 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:11Z","lastTransitionTime":"2026-01-21T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.347698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.347737 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.347746 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.347765 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.347776 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:11Z","lastTransitionTime":"2026-01-21T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.450700 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.450935 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.451341 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.451509 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.451749 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:11Z","lastTransitionTime":"2026-01-21T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.555147 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.555521 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.555660 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.555800 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.555914 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:11Z","lastTransitionTime":"2026-01-21T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.592094 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/0.log" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.592607 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p8q45" event={"ID":"25458900-3da2-4c9d-8463-9acde2add0a6","Type":"ContainerStarted","Data":"714407a10230aa649925c34cef574bad9510d3268300bcb3dadaba7c6bc9d9a7"} Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.607663 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt
\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.629000 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:52Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076852 6300 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076879 6300 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.076903 6300 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.078680 6300 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:37:52.078708 6300 factory.go:656] Stopping watch factory\\\\nI0121 10:37:52.078732 6300 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:52.123109 6300 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 10:37:52.123140 6300 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 10:37:52.123255 6300 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:37:52.123294 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 10:37:52.123498 6300 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8f
a4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.645468 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.659090 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.659141 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.659152 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.659173 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.659187 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:11Z","lastTransitionTime":"2026-01-21T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.661157 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa4c591-892a-4bf2-ad34-e9ed22b30fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab4c6b8f018a3b9a6cf312b8b3a2d14644736b45232de4dcd26408665ed9da1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.680357 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.697237 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.709964 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.723788 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.739881 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\"
:{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.757222 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.761626 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.761663 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.761673 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:11 crc 
kubenswrapper[4745]: I0121 10:38:11.761693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.761704 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:11Z","lastTransitionTime":"2026-01-21T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.770793 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.786721 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.819737 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.864354 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.864760 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.864802 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.864812 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 
10:38:11.864837 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.864853 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:11Z","lastTransitionTime":"2026-01-21T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.901584 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.916322 4745 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.933204 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714407a10230aa649925c34cef574bad9510d3268300bcb3dadaba7c6bc9d9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:38:09Z\\\",\\\"message\\\":\\\"2026-01-21T10:37:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400\\\\n2026-01-21T10:37:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400 to /host/opt/cni/bin/\\\\n2026-01-21T10:37:24Z [verbose] multus-daemon started\\\\n2026-01-21T10:37:24Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:38:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"p
hase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.952398 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:11Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:11 crc 
kubenswrapper[4745]: I0121 10:38:11.967932 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.968004 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.968027 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.968051 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:11 crc kubenswrapper[4745]: I0121 10:38:11.968065 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:11Z","lastTransitionTime":"2026-01-21T10:38:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.000305 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.000445 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.000674 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:12 crc kubenswrapper[4745]: E0121 10:38:12.000695 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:12 crc kubenswrapper[4745]: E0121 10:38:12.000727 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:12 crc kubenswrapper[4745]: E0121 10:38:12.000460 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.000811 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:12 crc kubenswrapper[4745]: E0121 10:38:12.000895 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.021598 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 21:51:45.791692269 +0000 UTC Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.071132 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.071184 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.071194 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.071211 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.071219 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:12Z","lastTransitionTime":"2026-01-21T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.175334 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.175413 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.175429 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.175812 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.175841 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:12Z","lastTransitionTime":"2026-01-21T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.278873 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.278914 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.278923 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.278937 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.278948 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:12Z","lastTransitionTime":"2026-01-21T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.382206 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.382950 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.383065 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.383150 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.383224 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:12Z","lastTransitionTime":"2026-01-21T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.486231 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.486282 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.486293 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.486313 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.486326 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:12Z","lastTransitionTime":"2026-01-21T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.589234 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.589799 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.589898 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.590014 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.590105 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:12Z","lastTransitionTime":"2026-01-21T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.692880 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.693246 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.693335 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.693410 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.693496 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:12Z","lastTransitionTime":"2026-01-21T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.796238 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.796305 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.796322 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.796347 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.796367 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:12Z","lastTransitionTime":"2026-01-21T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.899641 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.899698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.899708 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.899730 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:12 crc kubenswrapper[4745]: I0121 10:38:12.899741 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:12Z","lastTransitionTime":"2026-01-21T10:38:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.002701 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.002766 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.002779 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.002798 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.002813 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:13Z","lastTransitionTime":"2026-01-21T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.021987 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 02:56:11.338309811 +0000 UTC Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.105173 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.105221 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.105234 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.105257 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.105273 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:13Z","lastTransitionTime":"2026-01-21T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.208395 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.208437 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.208449 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.208470 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.208482 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:13Z","lastTransitionTime":"2026-01-21T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.311184 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.311241 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.311252 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.311269 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.311284 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:13Z","lastTransitionTime":"2026-01-21T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.414736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.414821 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.414841 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.414897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.414918 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:13Z","lastTransitionTime":"2026-01-21T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.517677 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.517720 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.517733 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.517752 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.517766 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:13Z","lastTransitionTime":"2026-01-21T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.621143 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.621506 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.621615 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.621766 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.621869 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:13Z","lastTransitionTime":"2026-01-21T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.725739 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.725797 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.725811 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.725834 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.725851 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:13Z","lastTransitionTime":"2026-01-21T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.828881 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.828955 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.828966 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.828986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.828997 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:13Z","lastTransitionTime":"2026-01-21T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.933122 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.933196 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.933215 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.933243 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:13 crc kubenswrapper[4745]: I0121 10:38:13.933258 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:13Z","lastTransitionTime":"2026-01-21T10:38:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.000184 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.000271 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.000197 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.000319 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:14 crc kubenswrapper[4745]: E0121 10:38:14.000380 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:14 crc kubenswrapper[4745]: E0121 10:38:14.000499 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:14 crc kubenswrapper[4745]: E0121 10:38:14.000613 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:14 crc kubenswrapper[4745]: E0121 10:38:14.000699 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.022301 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 17:01:51.811824178 +0000 UTC Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.036758 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.036809 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.036822 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.036847 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.036868 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:14Z","lastTransitionTime":"2026-01-21T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.139263 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.139355 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.139369 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.139398 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.139412 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:14Z","lastTransitionTime":"2026-01-21T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.241690 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.241736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.241746 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.241768 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.241781 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:14Z","lastTransitionTime":"2026-01-21T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.344051 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.344120 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.344136 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.344163 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.344180 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:14Z","lastTransitionTime":"2026-01-21T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.446944 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.446999 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.447015 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.447042 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.447060 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:14Z","lastTransitionTime":"2026-01-21T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.549680 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.549730 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.549743 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.549766 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.549781 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:14Z","lastTransitionTime":"2026-01-21T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.651983 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.652038 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.652049 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.652065 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.652078 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:14Z","lastTransitionTime":"2026-01-21T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.754901 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.754952 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.754962 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.754982 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.754994 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:14Z","lastTransitionTime":"2026-01-21T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.858283 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.858386 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.858402 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.858426 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.858437 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:14Z","lastTransitionTime":"2026-01-21T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.962546 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.962600 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.962611 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.962631 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:14 crc kubenswrapper[4745]: I0121 10:38:14.962645 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:14Z","lastTransitionTime":"2026-01-21T10:38:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.023555 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:12:47.585846777 +0000 UTC Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.064670 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.064721 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.064733 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.064753 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.064766 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:15Z","lastTransitionTime":"2026-01-21T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.167455 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.167509 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.167518 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.167569 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.168136 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:15Z","lastTransitionTime":"2026-01-21T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.270930 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.271312 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.271466 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.271592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.271683 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:15Z","lastTransitionTime":"2026-01-21T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.374267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.374322 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.374332 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.374349 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.374360 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:15Z","lastTransitionTime":"2026-01-21T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.476989 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.477047 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.477059 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.477083 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.477098 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:15Z","lastTransitionTime":"2026-01-21T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.580298 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.580745 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.580833 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.580947 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.581040 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:15Z","lastTransitionTime":"2026-01-21T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.684889 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.684932 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.684944 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.684965 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.684978 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:15Z","lastTransitionTime":"2026-01-21T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.788669 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.788714 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.788724 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.788747 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.788758 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:15Z","lastTransitionTime":"2026-01-21T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.892806 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.892845 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.892858 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.892877 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.892888 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:15Z","lastTransitionTime":"2026-01-21T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.996644 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.996681 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.996690 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.996709 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:15 crc kubenswrapper[4745]: I0121 10:38:15.996719 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:15Z","lastTransitionTime":"2026-01-21T10:38:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.000739 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:16 crc kubenswrapper[4745]: E0121 10:38:16.000855 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.001272 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:16 crc kubenswrapper[4745]: E0121 10:38:16.001340 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.001475 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:16 crc kubenswrapper[4745]: E0121 10:38:16.001550 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.001693 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:16 crc kubenswrapper[4745]: E0121 10:38:16.001771 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.016458 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.024199 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 15:05:35.519679854 +0000 UTC Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.035114 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afb
a93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"f
inishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab
6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026
-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.063304 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:5
9Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startT
ime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.083831 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.100227 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.100262 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.100271 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.100287 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.100297 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:16Z","lastTransitionTime":"2026-01-21T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.100797 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.119215 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.131703 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.144230 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714407a10230aa649925c34cef574bad9510d3268300bcb3dadaba7c6bc9d9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:38:09Z\\\",\\\"message\\\":\\\"2026-01-21T10:37:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400\\\\n2026-01-21T10:37:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400 to /host/opt/cni/bin/\\\\n2026-01-21T10:37:24Z [verbose] multus-daemon started\\\\n2026-01-21T10:37:24Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:38:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.161357 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc 
kubenswrapper[4745]: I0121 10:38:16.179209 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.195293 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.206568 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.206609 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.206618 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.206635 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.206646 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:16Z","lastTransitionTime":"2026-01-21T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.219551 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:52Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076852 6300 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076879 6300 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.076903 6300 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.078680 6300 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:37:52.078708 6300 factory.go:656] Stopping watch factory\\\\nI0121 10:37:52.078732 6300 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:52.123109 6300 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 10:37:52.123140 6300 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 10:37:52.123255 6300 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:37:52.123294 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 10:37:52.123498 6300 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8f
a4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.238662 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.258094 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.274587 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.291258 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.305622 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82
91b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.310135 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.310181 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.310194 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.310216 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.310229 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:16Z","lastTransitionTime":"2026-01-21T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.318217 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa4c591-892a-4bf2-ad34-e9ed22b30fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab4c6b8f018a3b9a6cf312b8b3a2d14644736b45232de4dcd26408665ed9da1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d
06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:16Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:16 crc 
kubenswrapper[4745]: I0121 10:38:16.412660 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.412707 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.412720 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.412744 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.412757 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:16Z","lastTransitionTime":"2026-01-21T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.516641 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.516707 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.516720 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.516743 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.516757 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:16Z","lastTransitionTime":"2026-01-21T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.619596 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.619655 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.619668 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.619697 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.619711 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:16Z","lastTransitionTime":"2026-01-21T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.723179 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.723267 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.723294 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.723328 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.723358 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:16Z","lastTransitionTime":"2026-01-21T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.826925 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.826999 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.827010 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.827057 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.827068 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:16Z","lastTransitionTime":"2026-01-21T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.930304 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.930402 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.930418 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.930443 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:16 crc kubenswrapper[4745]: I0121 10:38:16.930458 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:16Z","lastTransitionTime":"2026-01-21T10:38:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.024581 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:12:08.878289012 +0000 UTC Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.033409 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.033485 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.033501 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.033557 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.033574 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:17Z","lastTransitionTime":"2026-01-21T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.137144 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.137186 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.137199 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.137219 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.137233 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:17Z","lastTransitionTime":"2026-01-21T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.240136 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.240191 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.240203 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.240223 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.240237 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:17Z","lastTransitionTime":"2026-01-21T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.343383 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.343434 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.343444 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.343463 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.343477 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:17Z","lastTransitionTime":"2026-01-21T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.446940 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.447009 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.447023 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.447047 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.447061 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:17Z","lastTransitionTime":"2026-01-21T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.549054 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.549104 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.549119 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.549141 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.549159 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:17Z","lastTransitionTime":"2026-01-21T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.652136 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.652188 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.652201 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.652229 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.652244 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:17Z","lastTransitionTime":"2026-01-21T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.754824 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.754876 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.754886 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.754906 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.754921 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:17Z","lastTransitionTime":"2026-01-21T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.858063 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.858111 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.858120 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.858139 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.858149 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:17Z","lastTransitionTime":"2026-01-21T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.961441 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.961508 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.961520 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.961550 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.961560 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:17Z","lastTransitionTime":"2026-01-21T10:38:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.999425 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.999590 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.999637 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:17 crc kubenswrapper[4745]: E0121 10:38:17.999695 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:17 crc kubenswrapper[4745]: E0121 10:38:17.999781 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:17 crc kubenswrapper[4745]: I0121 10:38:17.999817 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:18 crc kubenswrapper[4745]: E0121 10:38:18.000047 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:18 crc kubenswrapper[4745]: E0121 10:38:18.000133 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.025343 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 15:44:08.398728332 +0000 UTC Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.064897 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.064969 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.064987 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.065017 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.065046 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:18Z","lastTransitionTime":"2026-01-21T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.168490 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.168561 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.168574 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.168592 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.168604 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:18Z","lastTransitionTime":"2026-01-21T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.271868 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.271904 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.271915 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.271931 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.271946 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:18Z","lastTransitionTime":"2026-01-21T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.375320 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.375405 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.375426 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.375454 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.375475 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:18Z","lastTransitionTime":"2026-01-21T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.478255 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.478334 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.478345 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.478368 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.478384 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:18Z","lastTransitionTime":"2026-01-21T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.581071 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.581132 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.581144 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.581166 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.581178 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:18Z","lastTransitionTime":"2026-01-21T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.683988 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.684033 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.684052 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.684073 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.684087 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:18Z","lastTransitionTime":"2026-01-21T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.787088 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.787157 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.787174 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.787199 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.787214 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:18Z","lastTransitionTime":"2026-01-21T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.889818 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.889877 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.889894 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.889916 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.889930 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:18Z","lastTransitionTime":"2026-01-21T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.992823 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.992878 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.992888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.992911 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:18 crc kubenswrapper[4745]: I0121 10:38:18.992926 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:18Z","lastTransitionTime":"2026-01-21T10:38:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.000211 4745 scope.go:117] "RemoveContainer" containerID="861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.026134 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 21:07:51.609273263 +0000 UTC Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.095986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.096025 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.096036 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.096054 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.096067 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:19Z","lastTransitionTime":"2026-01-21T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.199168 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.199223 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.199234 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.199252 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.199268 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:19Z","lastTransitionTime":"2026-01-21T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.302677 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.302715 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.302727 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.302746 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.302761 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:19Z","lastTransitionTime":"2026-01-21T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.405884 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.405928 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.405939 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.405958 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.405972 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:19Z","lastTransitionTime":"2026-01-21T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.508201 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.508240 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.508249 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.508268 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.508279 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:19Z","lastTransitionTime":"2026-01-21T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.610973 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.611074 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.611084 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.611103 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.611115 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:19Z","lastTransitionTime":"2026-01-21T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.623030 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/2.log" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.625558 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25"} Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.626826 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.649785 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers 
with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ov
n-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\
\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\
\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:52Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076852 6300 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076879 6300 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.076903 6300 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.078680 6300 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:37:52.078708 6300 factory.go:656] Stopping watch factory\\\\nI0121 10:37:52.078732 6300 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:52.123109 6300 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 10:37:52.123140 6300 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 10:37:52.123255 6300 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:37:52.123294 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 10:37:52.123498 6300 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.670499 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.694399 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa4c591-892a-4bf2-ad34-e9ed22b30fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab4c6b8f018a3b9a6cf312b8b3a2d14644736b45232de4dcd26408665ed9da1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.713829 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.713883 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.713896 4745 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.713916 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.713928 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:19Z","lastTransitionTime":"2026-01-21T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.722640 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b1
7de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.750138 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.768018 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.790995 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.806906 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82
91b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.817153 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.817205 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.817216 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.817235 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.817272 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:19Z","lastTransitionTime":"2026-01-21T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.825510 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.842380 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.865153 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.886287 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.905209 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.920284 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.920352 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.920366 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.920388 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.920406 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:19Z","lastTransitionTime":"2026-01-21T10:38:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.925346 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.947051 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.963240 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.982131 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714407a10230aa649925c34cef574bad9510d3268300bcb3dadaba7c6bc9d9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\
\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:38:09Z\\\",\\\"message\\\":\\\"2026-01-21T10:37:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400\\\\n2026-01-21T10:37:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400 to /host/opt/cni/bin/\\\\n2026-01-21T10:37:24Z [verbose] multus-daemon started\\\\n2026-01-21T10:37:24Z [verbose] Readiness Indicator file check\\\\n2026-01-21T10:38:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\
\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:19 crc kubenswrapper[4745]: I0121 10:38:19.999851 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:19.999854 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.000098 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:19.999919 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:19.999931 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.000224 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.000295 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.000366 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.001479 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:19Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc 
kubenswrapper[4745]: I0121 10:38:20.010709 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.010874 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.010844374 +0000 UTC m=+148.471631972 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.010918 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.010956 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.011007 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.011116 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.011116 4745 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.011133 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.011150 4745 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.011190 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:39:24.011178873 +0000 UTC m=+148.471966481 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.011210 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.011201263 +0000 UTC m=+148.471988861 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.011238 4745 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.011367 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.011336967 +0000 UTC m=+148.472124615 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.022768 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.022826 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.022837 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.022856 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.022868 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:20Z","lastTransitionTime":"2026-01-21T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.026910 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 05:19:40.797305141 +0000 UTC Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.125522 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.125621 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.125631 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.125650 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.125662 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:20Z","lastTransitionTime":"2026-01-21T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.213570 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.213840 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.213890 4745 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.213906 4745 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.213982 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.21395822 +0000 UTC m=+148.674745808 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.228447 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.228502 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.228512 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.228547 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.228560 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:20Z","lastTransitionTime":"2026-01-21T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.331621 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.331690 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.331702 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.331733 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.331749 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:20Z","lastTransitionTime":"2026-01-21T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.435767 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.435812 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.435823 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.435839 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.435850 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:20Z","lastTransitionTime":"2026-01-21T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.544416 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.544871 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.544975 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.545077 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.545183 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:20Z","lastTransitionTime":"2026-01-21T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.630955 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/3.log" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.632356 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/2.log" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.635906 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" exitCode=1 Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.635983 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25"} Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.636068 4745 scope.go:117] "RemoveContainer" containerID="861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.636721 4745 scope.go:117] "RemoveContainer" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" Jan 21 10:38:20 crc kubenswrapper[4745]: E0121 10:38:20.636911 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.648856 4745 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.648907 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.648919 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.648941 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.648952 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:20Z","lastTransitionTime":"2026-01-21T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.658415 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.675350 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.687302 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.702708 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.713435 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\"
:\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.727413 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714407a10230aa649925c34cef574bad9510d3268300bcb3dadaba7c6bc9d9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:38:09Z\\\",\\\"message\\\":\\\"2026-01-21T10:37:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400\\\\n2026-01-21T10:37:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400 to /host/opt/cni/bin/\\\\n2026-01-21T10:37:24Z [verbose] multus-daemon started\\\\n2026-01-21T10:37:24Z [verbose] 
Readiness Indicator file check\\\\n2026-01-21T10:38:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.741335 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.751733 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.751785 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.751797 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.751817 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.751831 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:20Z","lastTransitionTime":"2026-01-21T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.757201 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d
6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.774748 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.792006 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.805463 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.825001 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://861e9b2c8303fc566d4e4efdc1eeb2fe2c46a1ecab0d84798d3b75cb2b2489ea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:37:52Z\\\",\\\"message\\\":\\\"] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076852 6300 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0121 10:37:52.076879 6300 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.076903 6300 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0121 10:37:52.078680 6300 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0121 10:37:52.078708 6300 factory.go:656] Stopping watch factory\\\\nI0121 10:37:52.078732 6300 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0121 10:37:52.123109 6300 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0121 10:37:52.123140 6300 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0121 10:37:52.123255 6300 ovnkube.go:599] Stopped ovnkube\\\\nI0121 10:37:52.123294 6300 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0121 10:37:52.123498 6300 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:38:20Z\\\",\\\"message\\\":\\\"Pod openshift-multus/network-metrics-daemon-px52r in node crc\\\\nI0121 10:38:20.261398 6689 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx\\\\nI0121 10:38:20.261409 6689 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx in node crc\\\\nI0121 10:38:20.261409 6689 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-7855h\\\\nF0121 
10:38:20.261423 6689 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:38:20.261433 6689 obj_retry.go:365] Adding new \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:38:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mo
untPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.837439 4745 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.852473 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b85f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:3
7:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.854859 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.854901 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.854913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.854931 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.855245 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:20Z","lastTransitionTime":"2026-01-21T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.871758 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94
bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.886077 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa4c591-892a-4bf2-ad34-e9ed22b30fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab4c6b8f018a3b9a6cf312b8b3a2d14644736b45232de4dcd26408665ed9da1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.903120 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.919991 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.958724 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.958762 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.958773 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.958792 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:20 crc kubenswrapper[4745]: I0121 10:38:20.958804 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:20Z","lastTransitionTime":"2026-01-21T10:38:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.027893 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 16:32:46.798811007 +0000 UTC Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.040710 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.040761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.040772 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.040795 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.040807 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: E0121 10:38:21.055112 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.058997 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.059065 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.059081 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.059101 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.059114 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: E0121 10:38:21.072426 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.078224 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.078265 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.078274 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.078291 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.078304 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: E0121 10:38:21.094065 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.099130 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.099190 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.099205 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.099228 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.099246 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.118560 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.118610 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.118623 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.118648 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.118661 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: E0121 10:38:21.133368 4745 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:21Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8b9b3b63-c3e2-45d5-9e88-04bbcf1452b6\\\",\\\"systemUUID\\\":\\\"fa2b5303-0f9c-4975-b62d-81213d42789a\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: E0121 10:38:21.134020 4745 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.136327 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.136461 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.136578 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.136674 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.136791 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.239738 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.239771 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.239781 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.239800 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.239809 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.342301 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.342341 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.342351 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.342368 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.342377 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.445676 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.446257 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.446275 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.446300 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.446316 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.549701 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.549759 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.549770 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.549792 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.549803 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.641416 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/3.log" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.645956 4745 scope.go:117] "RemoveContainer" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" Jan 21 10:38:21 crc kubenswrapper[4745]: E0121 10:38:21.646340 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.658044 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.658088 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.658096 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.658115 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.658124 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.662301 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.675160 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8abb3db-dbf8-4568-a6dc-c88674d222b1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a177ee2294cf7cf5ce133cf1198071c059ec153bd6d4d6708eac4850d243d3c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d7486eae2022698d215270d2e8a6d811472b43a
03907aa7876c33ea0e24ea7a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wmg59\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-b8tqm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.690119 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37687014-8686-4419-980d-e754a7f7037f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78fa149c85d48eea4dc87ea9932245fe3e7a2216367b5bc3faed4254fc5f6ccd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1db10faf7b8e51d48df9b5c286e6cff8e72190facfd146a46d7810fc595e38e2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a14b35686b9eb1d61fb30f7b438867466f6736217d62d8b4b1a9caa328be1ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://354d5a3af865b7850ec27be104e6957a2a60482f2093c4445274e47c098bd3d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c749
9736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5c7499736c52aba0caaed7dde88f1ebb74b74bc81e18e1e45f509cb0fd7d6fc2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://029066ab3152e121437f0bf695e18ed0ae69dabab6c8621d1daa01451f2ca94e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f67abd9abafaaef945ab081d95da28117587eb71b80af184721aec73064ebb1a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhwll\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-pnnzc\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.705031 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.718666 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4376a167-a771-4b79-980d-3409995f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ba5af9f661d7ed1e7778e59eb6963dd69889339bb74a3461f5f480df57148b09\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2417716c54e2fd7917705525d906edf6af1545eae7c09b17bddeb53c00e9b237\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cef6c227b08d8c99c98cc87547994468b26b965b13b523fde8ecfa9c1c455f98\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.733861 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.749075 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0fded7ba49cd4be2dd22bbef711e5fe2911d4b9277d8c40b1e811e542b3146a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4244b5d612c395c3655fafb07853ee919d8e3ab058c05327306083bda5fde2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.761594 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.761634 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.761647 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.761667 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.761683 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.763320 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-7855h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bf705b0-6d21-4c31-ab5f-7439aa4607af\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://854df8aeb94893d57a20dbf02828759152759471d5a0ee49f593bf49d28ee030\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2hd6g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-7855h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.778581 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p8q45" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25458900-3da2-4c9d-8463-9acde2add0a6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:38:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714407a10230aa649925c34cef574bad9510d3268300bcb3dadaba7c6bc9d9a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:38:09Z\\\",\\\"message\\\":\\\"2026-01-21T10:37:23+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400\\\\n2026-01-21T10:37:23+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_0822900c-b143-4d7e-af9a-d1168ece3400 to /host/opt/cni/bin/\\\\n2026-01-21T10:37:24Z [verbose] multus-daemon started\\\\n2026-01-21T10:37:24Z [verbose] 
Readiness Indicator file check\\\\n2026-01-21T10:38:09Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:38:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jhmz4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p8q45\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.792509 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-px52r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"df21a803-8072-4f8f-8f3a-00267f9c3419\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vssx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:34Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-px52r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.812879 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:22Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-21T10:38:20Z\\\",\\\"message\\\":\\\"Pod openshift-multus/network-metrics-daemon-px52r in node crc\\\\nI0121 10:38:20.261398 6689 obj_retry.go:365] Adding new object: *v1.Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx\\\\nI0121 10:38:20.261409 6689 ovn.go:134] Ensuring zone local for Pod openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx in node 
crc\\\\nI0121 10:38:20.261409 6689 obj_retry.go:303] Retry object setup: *v1.Pod openshift-dns/node-resolver-7855h\\\\nF0121 10:38:20.261423 6689 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:20Z is after 2025-08-24T17:21:41Z]\\\\nI0121 10:38:20.261433 6689 obj_retry.go:365] Adding new \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-21T10:38:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36e14c10bba28f6c8f
a4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:37:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xf85x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:19Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l7mcj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.827153 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:22Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://65c431daa5fc6ee1ef98cc788e2e3cfde4f71b9ad47a112ef9abb432fa434caf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.838130 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa4c591-892a-4bf2-ad34-e9ed22b30fb3\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab4c6b8f018a3b9a6cf312b8b3a2d14644736b45232de4dcd26408665ed9da1a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.1
26.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4f21d3bf08db2d11638e4b28fd645f2840ea35281a148cf41445355a22e8e879\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.851566 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:17Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3505eb23cdbc3ccebd43116c499f060c28b3dfccedcd2aa9275661d193a5b687\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.863200 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:15Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.865736 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.865763 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.865775 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.865795 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.865810 4745 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.876939 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kf868" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f4d23707-4f4b-4424-a350-f952443dcc4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34103b4ddce091b9f3cea91776f5549533195df7b4312a517f1d30f7da354189\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbs6w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kf868\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.890373 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4436701b-89b4-411a-acc4-95be1ca116a9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db00bb217bd20915ef15e23791486c65ccf279e234b422688594c136e1510b4b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://909fefbb923408e1067fc5efc9feff51f92b8
5f3ad8ba5e27e89e673a3ebdd17\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:37:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8p26p\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:37:33Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tnqtx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.905739 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a8bd107d-f4ed-4d69-a372-c3a2e1ca9d59\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:37:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://744e2bf46c8b96741042255b67f9b362b082d98f84136d41b4c7e75c1e928075\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c882c057192253efb3f2945553b94bd8b18b761f5978e52d5379e041608a6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b7d17f7f60dfa3b8bbda3f2752e61c41c13725ea684edb8c3baa8e94550770d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:36:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8291b08a917581b1f257bfe926a563d8f5eb399b8225f3c48954ebe87decf627\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-21T10:36:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-21T10:36:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-21T10:36:56Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-21T10:38:21Z is after 2025-08-24T17:21:41Z" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.969252 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.969294 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.969305 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.969325 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.969336 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:21Z","lastTransitionTime":"2026-01-21T10:38:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.999347 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.999432 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.999348 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:21 crc kubenswrapper[4745]: E0121 10:38:21.999545 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:21 crc kubenswrapper[4745]: E0121 10:38:21.999783 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:21 crc kubenswrapper[4745]: I0121 10:38:21.999851 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:21 crc kubenswrapper[4745]: E0121 10:38:21.999826 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:22 crc kubenswrapper[4745]: E0121 10:38:21.999941 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.028298 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 21:36:03.091631019 +0000 UTC Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.072619 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.072657 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.072667 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.072683 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:22 crc 
kubenswrapper[4745]: I0121 10:38:22.072693 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:22Z","lastTransitionTime":"2026-01-21T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.175698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.175761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.175774 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.175808 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.175822 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:22Z","lastTransitionTime":"2026-01-21T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.279158 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.279207 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.279223 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.279242 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.279282 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:22Z","lastTransitionTime":"2026-01-21T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.381981 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.382038 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.382054 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.382074 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.382091 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:22Z","lastTransitionTime":"2026-01-21T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.485309 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.485351 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.485363 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.485380 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.485392 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:22Z","lastTransitionTime":"2026-01-21T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.588162 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.588211 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.588224 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.588245 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.588259 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:22Z","lastTransitionTime":"2026-01-21T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.691646 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.691693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.691704 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.691721 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.691731 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:22Z","lastTransitionTime":"2026-01-21T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.794823 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.794867 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.794881 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.794898 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.794909 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:22Z","lastTransitionTime":"2026-01-21T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.898173 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.898213 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.898224 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.898242 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:22 crc kubenswrapper[4745]: I0121 10:38:22.898253 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:22Z","lastTransitionTime":"2026-01-21T10:38:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.001838 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.001896 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.001908 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.001926 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.002223 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:23Z","lastTransitionTime":"2026-01-21T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.029322 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 16:26:23.207397707 +0000 UTC Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.105474 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.105549 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.105559 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.105576 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.105609 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:23Z","lastTransitionTime":"2026-01-21T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.209123 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.209206 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.209232 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.209253 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.209266 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:23Z","lastTransitionTime":"2026-01-21T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.312914 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.312960 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.312969 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.312986 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.312998 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:23Z","lastTransitionTime":"2026-01-21T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.416476 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.416551 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.416567 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.416590 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.416626 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:23Z","lastTransitionTime":"2026-01-21T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.519548 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.519600 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.519612 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.519675 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.519705 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:23Z","lastTransitionTime":"2026-01-21T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.622819 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.622873 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.622885 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.622905 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.622918 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:23Z","lastTransitionTime":"2026-01-21T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.725821 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.725863 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.725873 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.725891 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.725903 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:23Z","lastTransitionTime":"2026-01-21T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.829588 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.829641 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.829651 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.829674 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.829685 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:23Z","lastTransitionTime":"2026-01-21T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.932741 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.932806 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.932816 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.932845 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.932858 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:23Z","lastTransitionTime":"2026-01-21T10:38:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.999589 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.999696 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:23 crc kubenswrapper[4745]: I0121 10:38:23.999734 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:24 crc kubenswrapper[4745]: E0121 10:38:23.999940 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:23.999966 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:24 crc kubenswrapper[4745]: E0121 10:38:24.000068 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:24 crc kubenswrapper[4745]: E0121 10:38:24.000189 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:24 crc kubenswrapper[4745]: E0121 10:38:24.000723 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.030074 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 12:46:15.206372408 +0000 UTC Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.035729 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.035783 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.035796 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.035818 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.035832 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:24Z","lastTransitionTime":"2026-01-21T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.139090 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.139141 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.139151 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.139172 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.139185 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:24Z","lastTransitionTime":"2026-01-21T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.241503 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.241591 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.241602 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.241619 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.241630 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:24Z","lastTransitionTime":"2026-01-21T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.344697 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.344783 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.344799 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.344830 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.344881 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:24Z","lastTransitionTime":"2026-01-21T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.448336 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.448390 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.448403 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.448429 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.448446 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:24Z","lastTransitionTime":"2026-01-21T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.551654 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.551706 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.551718 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.551738 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.551751 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:24Z","lastTransitionTime":"2026-01-21T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.653886 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.653938 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.653951 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.653975 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.653991 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:24Z","lastTransitionTime":"2026-01-21T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.757511 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.757615 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.757628 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.757651 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.757668 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:24Z","lastTransitionTime":"2026-01-21T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.859753 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.859807 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.859821 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.859842 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.859873 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:24Z","lastTransitionTime":"2026-01-21T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.962210 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.962263 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.962271 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.962290 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:24 crc kubenswrapper[4745]: I0121 10:38:24.962304 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:24Z","lastTransitionTime":"2026-01-21T10:38:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.030479 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:39:38.660403322 +0000 UTC Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.064913 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.065013 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.065029 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.065057 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.065075 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:25Z","lastTransitionTime":"2026-01-21T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.168233 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.168347 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.168361 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.168382 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.168396 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:25Z","lastTransitionTime":"2026-01-21T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.272620 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.272690 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.272705 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.272726 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.272740 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:25Z","lastTransitionTime":"2026-01-21T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.375467 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.375561 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.375576 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.375599 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.375613 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:25Z","lastTransitionTime":"2026-01-21T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.479025 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.479083 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.479095 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.479116 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.479131 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:25Z","lastTransitionTime":"2026-01-21T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.582021 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.582088 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.582098 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.582117 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.582136 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:25Z","lastTransitionTime":"2026-01-21T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.684569 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.684628 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.684639 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.684655 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.684670 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:25Z","lastTransitionTime":"2026-01-21T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.788034 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.788107 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.788119 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.788139 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.788171 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:25Z","lastTransitionTime":"2026-01-21T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.890790 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.890829 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.890840 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.890858 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.890872 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:25Z","lastTransitionTime":"2026-01-21T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.993632 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.993734 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.993748 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.993769 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:25 crc kubenswrapper[4745]: I0121 10:38:25.993800 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:25Z","lastTransitionTime":"2026-01-21T10:38:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:25.999960 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.000030 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.000132 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.000030 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:26 crc kubenswrapper[4745]: E0121 10:38:26.000240 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:26 crc kubenswrapper[4745]: E0121 10:38:26.000103 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:26 crc kubenswrapper[4745]: E0121 10:38:26.000434 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:26 crc kubenswrapper[4745]: E0121 10:38:26.000491 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.030700 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 13:54:20.293941767 +0000 UTC Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.095727 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.096148 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.096174 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.096205 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.096223 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:26Z","lastTransitionTime":"2026-01-21T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.198813 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.198859 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.198870 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.198891 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.198906 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:26Z","lastTransitionTime":"2026-01-21T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.301260 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.301311 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.301323 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.301342 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.301356 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:26Z","lastTransitionTime":"2026-01-21T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.404626 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.404687 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.404699 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.404722 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.404736 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:26Z","lastTransitionTime":"2026-01-21T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.448218 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.44818661 podStartE2EDuration="23.44818661s" podCreationTimestamp="2026-01-21 10:38:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:38:26.423363189 +0000 UTC m=+90.884150837" watchObservedRunningTime="2026-01-21 10:38:26.44818661 +0000 UTC m=+90.908974208" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.503705 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kf868" podStartSLOduration=69.503677699 podStartE2EDuration="1m9.503677699s" podCreationTimestamp="2026-01-21 10:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:38:26.48444608 +0000 UTC m=+90.945233688" watchObservedRunningTime="2026-01-21 10:38:26.503677699 +0000 UTC m=+90.964465287" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.503873 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tnqtx" podStartSLOduration=66.503868523 podStartE2EDuration="1m6.503868523s" podCreationTimestamp="2026-01-21 10:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:38:26.50219136 +0000 UTC m=+90.962978948" watchObservedRunningTime="2026-01-21 10:38:26.503868523 +0000 UTC m=+90.964656111" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.506761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:26 crc kubenswrapper[4745]: 
I0121 10:38:26.506786 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.506795 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.506810 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.506822 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:26Z","lastTransitionTime":"2026-01-21T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.550854 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=43.550826955 podStartE2EDuration="43.550826955s" podCreationTimestamp="2026-01-21 10:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:38:26.533421823 +0000 UTC m=+90.994209421" watchObservedRunningTime="2026-01-21 10:38:26.550826955 +0000 UTC m=+91.011614553" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.572719 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podStartSLOduration=69.572668789 podStartE2EDuration="1m9.572668789s" podCreationTimestamp="2026-01-21 10:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 10:38:26.572452104 +0000 UTC m=+91.033239702" watchObservedRunningTime="2026-01-21 10:38:26.572668789 +0000 UTC m=+91.033456387" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.595944 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-pnnzc" podStartSLOduration=68.595918989 podStartE2EDuration="1m8.595918989s" podCreationTimestamp="2026-01-21 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:38:26.595579831 +0000 UTC m=+91.056367449" watchObservedRunningTime="2026-01-21 10:38:26.595918989 +0000 UTC m=+91.056706587" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.609191 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.609226 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.609235 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.609251 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.609262 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:26Z","lastTransitionTime":"2026-01-21T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.641178 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=71.641160399 podStartE2EDuration="1m11.641160399s" podCreationTimestamp="2026-01-21 10:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:38:26.61917674 +0000 UTC m=+91.079964338" watchObservedRunningTime="2026-01-21 10:38:26.641160399 +0000 UTC m=+91.101947997" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.658651 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.658621322 podStartE2EDuration="1m8.658621322s" podCreationTimestamp="2026-01-21 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:38:26.641798685 +0000 UTC m=+91.102586293" watchObservedRunningTime="2026-01-21 10:38:26.658621322 +0000 UTC m=+91.119408920" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.704770 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-7855h" podStartSLOduration=69.704747922 podStartE2EDuration="1m9.704747922s" podCreationTimestamp="2026-01-21 10:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:38:26.689450034 +0000 UTC m=+91.150237632" watchObservedRunningTime="2026-01-21 10:38:26.704747922 +0000 UTC m=+91.165535520" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.711549 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 
10:38:26.711585 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.711596 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.711615 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.711627 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:26Z","lastTransitionTime":"2026-01-21T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.718135 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-p8q45" podStartSLOduration=68.718120772 podStartE2EDuration="1m8.718120772s" podCreationTimestamp="2026-01-21 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:38:26.70543976 +0000 UTC m=+91.166227358" watchObservedRunningTime="2026-01-21 10:38:26.718120772 +0000 UTC m=+91.178908360" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.814641 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.814689 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.814701 4745 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.814720 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.814733 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:26Z","lastTransitionTime":"2026-01-21T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.917915 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.917974 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.917991 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.918014 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:26 crc kubenswrapper[4745]: I0121 10:38:26.918031 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:26Z","lastTransitionTime":"2026-01-21T10:38:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.021188 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.021232 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.021247 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.021269 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.021283 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:27Z","lastTransitionTime":"2026-01-21T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.031594 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:07:03.25913915 +0000 UTC Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.124355 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.124407 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.124421 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.124442 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.124454 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:27Z","lastTransitionTime":"2026-01-21T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.228330 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.228399 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.228411 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.228437 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.228452 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:27Z","lastTransitionTime":"2026-01-21T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.332255 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.332306 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.332323 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.332344 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.332358 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:27Z","lastTransitionTime":"2026-01-21T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.435046 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.435120 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.435138 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.435165 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.435184 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:27Z","lastTransitionTime":"2026-01-21T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.541684 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.541764 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.541783 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.541815 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.541836 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:27Z","lastTransitionTime":"2026-01-21T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.645579 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.645646 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.645665 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.645699 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.645719 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:27Z","lastTransitionTime":"2026-01-21T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.750526 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.750743 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.750766 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.750797 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.750815 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:27Z","lastTransitionTime":"2026-01-21T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.854814 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.854888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.854909 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.854946 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.854969 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:27Z","lastTransitionTime":"2026-01-21T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.957693 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.957752 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.957763 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.957788 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:27 crc kubenswrapper[4745]: I0121 10:38:27.957802 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:27Z","lastTransitionTime":"2026-01-21T10:38:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.000028 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.000100 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:28 crc kubenswrapper[4745]: E0121 10:38:28.000160 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.000159 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:28 crc kubenswrapper[4745]: E0121 10:38:28.000265 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.000277 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:28 crc kubenswrapper[4745]: E0121 10:38:28.000557 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:28 crc kubenswrapper[4745]: E0121 10:38:28.000617 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.032283 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 13:16:36.515204712 +0000 UTC Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.060813 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.060888 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.060901 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.060924 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.060938 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:28Z","lastTransitionTime":"2026-01-21T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.164077 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.164123 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.164132 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.164149 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.164162 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:28Z","lastTransitionTime":"2026-01-21T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.266457 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.266515 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.266526 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.266569 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.266584 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:28Z","lastTransitionTime":"2026-01-21T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.369492 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.369605 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.369624 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.369653 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.369676 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:28Z","lastTransitionTime":"2026-01-21T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.471879 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.471942 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.471967 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.472038 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.472066 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:28Z","lastTransitionTime":"2026-01-21T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.575790 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.575860 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.575891 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.575920 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.575933 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:28Z","lastTransitionTime":"2026-01-21T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.679761 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.679812 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.679829 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.679851 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.679867 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:28Z","lastTransitionTime":"2026-01-21T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.782298 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.782332 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.782341 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.782358 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.782370 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:28Z","lastTransitionTime":"2026-01-21T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.885417 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.885464 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.885476 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.885495 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.885507 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:28Z","lastTransitionTime":"2026-01-21T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.988382 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.988429 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.988444 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.988465 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:28 crc kubenswrapper[4745]: I0121 10:38:28.988477 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:28Z","lastTransitionTime":"2026-01-21T10:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.033069 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 23:24:37.742845508 +0000 UTC Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.091803 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.091893 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.091912 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.091934 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.091949 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:29Z","lastTransitionTime":"2026-01-21T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.194568 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.194628 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.194643 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.194709 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.194744 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:29Z","lastTransitionTime":"2026-01-21T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.298008 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.298103 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.298113 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.298131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.298143 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:29Z","lastTransitionTime":"2026-01-21T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.401358 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.401405 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.401417 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.401438 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.401460 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:29Z","lastTransitionTime":"2026-01-21T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.504651 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.504698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.504710 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.504729 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.504741 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:29Z","lastTransitionTime":"2026-01-21T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.607608 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.607659 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.607674 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.607696 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.607707 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:29Z","lastTransitionTime":"2026-01-21T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.710312 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.710363 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.710374 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.710393 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.710405 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:29Z","lastTransitionTime":"2026-01-21T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.813413 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.813466 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.813476 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.813494 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.813504 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:29Z","lastTransitionTime":"2026-01-21T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.916164 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.916235 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.916249 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.916291 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.916304 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:29Z","lastTransitionTime":"2026-01-21T10:38:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.999650 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.999708 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:29 crc kubenswrapper[4745]: I0121 10:38:29.999705 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:29.999755 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:30 crc kubenswrapper[4745]: E0121 10:38:29.999844 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:30 crc kubenswrapper[4745]: E0121 10:38:30.000000 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:30 crc kubenswrapper[4745]: E0121 10:38:30.000288 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:30 crc kubenswrapper[4745]: E0121 10:38:30.000366 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.018994 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.019031 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.019039 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.019055 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.019064 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:30Z","lastTransitionTime":"2026-01-21T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.033847 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 04:39:45.475222899 +0000 UTC Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.122554 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.123161 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.123173 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.123198 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.123213 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:30Z","lastTransitionTime":"2026-01-21T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.226270 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.226307 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.226316 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.226333 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.226344 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:30Z","lastTransitionTime":"2026-01-21T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.329224 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.329268 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.329279 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.329301 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.329315 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:30Z","lastTransitionTime":"2026-01-21T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.432022 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.432064 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.432075 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.432093 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.432104 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:30Z","lastTransitionTime":"2026-01-21T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.535141 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.535184 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.535193 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.535212 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.535221 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:30Z","lastTransitionTime":"2026-01-21T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.638130 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.638186 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.638200 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.638222 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.638236 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:30Z","lastTransitionTime":"2026-01-21T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.741244 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.741288 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.741307 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.741326 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.741337 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:30Z","lastTransitionTime":"2026-01-21T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.845050 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.845100 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.845111 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.845131 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.845145 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:30Z","lastTransitionTime":"2026-01-21T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.948641 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.948698 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.948711 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.948732 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:30 crc kubenswrapper[4745]: I0121 10:38:30.948745 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:30Z","lastTransitionTime":"2026-01-21T10:38:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.034514 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 06:28:17.690360606 +0000 UTC Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.052190 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.052256 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.052269 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.052296 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.052309 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:31Z","lastTransitionTime":"2026-01-21T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.156920 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.156959 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.156971 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.156993 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.157005 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:31Z","lastTransitionTime":"2026-01-21T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.260350 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.260400 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.260410 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.260431 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.260443 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:31Z","lastTransitionTime":"2026-01-21T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.262061 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.262116 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.262124 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.262143 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.262154 4745 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-21T10:38:31Z","lastTransitionTime":"2026-01-21T10:38:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.320802 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc"] Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.321297 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.323317 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.323901 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.324661 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.333792 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.458326 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/eb93686c-cc70-471c-85d6-7c2e340cbf65-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.458387 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb93686c-cc70-471c-85d6-7c2e340cbf65-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.458412 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/eb93686c-cc70-471c-85d6-7c2e340cbf65-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.458442 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb93686c-cc70-471c-85d6-7c2e340cbf65-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.458466 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb93686c-cc70-471c-85d6-7c2e340cbf65-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.559651 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/eb93686c-cc70-471c-85d6-7c2e340cbf65-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.559696 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb93686c-cc70-471c-85d6-7c2e340cbf65-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.559718 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/eb93686c-cc70-471c-85d6-7c2e340cbf65-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.559749 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb93686c-cc70-471c-85d6-7c2e340cbf65-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.559786 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb93686c-cc70-471c-85d6-7c2e340cbf65-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.560693 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/eb93686c-cc70-471c-85d6-7c2e340cbf65-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.560751 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/eb93686c-cc70-471c-85d6-7c2e340cbf65-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.561740 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/eb93686c-cc70-471c-85d6-7c2e340cbf65-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.573609 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb93686c-cc70-471c-85d6-7c2e340cbf65-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.579259 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/eb93686c-cc70-471c-85d6-7c2e340cbf65-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8h8sc\" (UID: \"eb93686c-cc70-471c-85d6-7c2e340cbf65\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.639023 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" Jan 21 10:38:31 crc kubenswrapper[4745]: I0121 10:38:31.688223 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" event={"ID":"eb93686c-cc70-471c-85d6-7c2e340cbf65","Type":"ContainerStarted","Data":"44056c4c46b1fcb05912002eb35ad7429caa34aa5513d11124a833050a5d7880"} Jan 21 10:38:32 crc kubenswrapper[4745]: I0121 10:38:31.999868 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:32 crc kubenswrapper[4745]: E0121 10:38:32.000036 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:32 crc kubenswrapper[4745]: I0121 10:38:32.000669 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:32 crc kubenswrapper[4745]: E0121 10:38:32.000743 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:32 crc kubenswrapper[4745]: I0121 10:38:31.999879 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:32 crc kubenswrapper[4745]: I0121 10:38:32.000923 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:32 crc kubenswrapper[4745]: E0121 10:38:32.000974 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:32 crc kubenswrapper[4745]: E0121 10:38:32.001122 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:32 crc kubenswrapper[4745]: I0121 10:38:32.019414 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 21 10:38:32 crc kubenswrapper[4745]: I0121 10:38:32.035738 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:21:27.247029261 +0000 UTC Jan 21 10:38:32 crc kubenswrapper[4745]: I0121 10:38:32.035847 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 21 10:38:32 crc kubenswrapper[4745]: I0121 10:38:32.052704 4745 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 21 10:38:32 crc kubenswrapper[4745]: I0121 10:38:32.790366 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" event={"ID":"eb93686c-cc70-471c-85d6-7c2e340cbf65","Type":"ContainerStarted","Data":"46b4b8e7aeaf7d1a5c97a32320d99cfc1ac3504402ea3532919ef9e4bda08b95"} Jan 21 10:38:32 crc kubenswrapper[4745]: I0121 10:38:32.880454 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=0.880424865 podStartE2EDuration="880.424865ms" podCreationTimestamp="2026-01-21 10:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:38:32.878143197 +0000 UTC m=+97.338930795" watchObservedRunningTime="2026-01-21 10:38:32.880424865 +0000 UTC m=+97.341212463" Jan 21 10:38:32 crc kubenswrapper[4745]: I0121 10:38:32.881588 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8h8sc" podStartSLOduration=74.881579264 
podStartE2EDuration="1m14.881579264s" podCreationTimestamp="2026-01-21 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:38:32.823688785 +0000 UTC m=+97.284476383" watchObservedRunningTime="2026-01-21 10:38:32.881579264 +0000 UTC m=+97.342366862" Jan 21 10:38:33 crc kubenswrapper[4745]: I0121 10:38:33.999789 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:34 crc kubenswrapper[4745]: I0121 10:38:33.999907 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:34 crc kubenswrapper[4745]: I0121 10:38:33.999985 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:34 crc kubenswrapper[4745]: E0121 10:38:34.000176 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:34 crc kubenswrapper[4745]: I0121 10:38:34.000459 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:34 crc kubenswrapper[4745]: E0121 10:38:34.000565 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:34 crc kubenswrapper[4745]: E0121 10:38:34.000711 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:34 crc kubenswrapper[4745]: E0121 10:38:34.000829 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:36 crc kubenswrapper[4745]: I0121 10:38:36.020432 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:36 crc kubenswrapper[4745]: I0121 10:38:36.023199 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:36 crc kubenswrapper[4745]: I0121 10:38:36.023428 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:36 crc kubenswrapper[4745]: E0121 10:38:36.023109 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:36 crc kubenswrapper[4745]: E0121 10:38:36.023616 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:36 crc kubenswrapper[4745]: E0121 10:38:36.023758 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:36 crc kubenswrapper[4745]: I0121 10:38:36.024031 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:36 crc kubenswrapper[4745]: I0121 10:38:36.025317 4745 scope.go:117] "RemoveContainer" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" Jan 21 10:38:36 crc kubenswrapper[4745]: E0121 10:38:36.025480 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" Jan 21 10:38:36 crc kubenswrapper[4745]: E0121 10:38:36.024980 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:37 crc kubenswrapper[4745]: I0121 10:38:37.999322 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:38 crc kubenswrapper[4745]: I0121 10:38:37.999419 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:38 crc kubenswrapper[4745]: E0121 10:38:37.999503 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:38 crc kubenswrapper[4745]: I0121 10:38:37.999329 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:38 crc kubenswrapper[4745]: E0121 10:38:37.999598 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:38 crc kubenswrapper[4745]: E0121 10:38:37.999681 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:38 crc kubenswrapper[4745]: I0121 10:38:38.000225 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:38 crc kubenswrapper[4745]: E0121 10:38:38.000457 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:38 crc kubenswrapper[4745]: I0121 10:38:38.344956 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:38 crc kubenswrapper[4745]: E0121 10:38:38.345214 4745 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:38:38 crc kubenswrapper[4745]: E0121 10:38:38.345646 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs podName:df21a803-8072-4f8f-8f3a-00267f9c3419 nodeName:}" failed. No retries permitted until 2026-01-21 10:39:42.345615823 +0000 UTC m=+166.806403421 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs") pod "network-metrics-daemon-px52r" (UID: "df21a803-8072-4f8f-8f3a-00267f9c3419") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 21 10:38:40 crc kubenswrapper[4745]: I0121 10:38:39.999970 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:40 crc kubenswrapper[4745]: I0121 10:38:40.000045 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:40 crc kubenswrapper[4745]: I0121 10:38:40.000080 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:40 crc kubenswrapper[4745]: E0121 10:38:40.000161 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:40 crc kubenswrapper[4745]: I0121 10:38:40.000004 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:40 crc kubenswrapper[4745]: E0121 10:38:40.000324 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:40 crc kubenswrapper[4745]: E0121 10:38:40.000407 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:40 crc kubenswrapper[4745]: E0121 10:38:40.000460 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:41 crc kubenswrapper[4745]: I0121 10:38:41.999371 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:42 crc kubenswrapper[4745]: I0121 10:38:41.999371 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:42 crc kubenswrapper[4745]: E0121 10:38:41.999907 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:42 crc kubenswrapper[4745]: I0121 10:38:41.999429 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:42 crc kubenswrapper[4745]: I0121 10:38:41.999393 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:42 crc kubenswrapper[4745]: E0121 10:38:42.000008 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:42 crc kubenswrapper[4745]: E0121 10:38:42.000083 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:42 crc kubenswrapper[4745]: E0121 10:38:42.000198 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:44 crc kubenswrapper[4745]: I0121 10:38:43.999969 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:44 crc kubenswrapper[4745]: I0121 10:38:44.000068 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:44 crc kubenswrapper[4745]: E0121 10:38:44.000127 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:44 crc kubenswrapper[4745]: I0121 10:38:43.999969 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:44 crc kubenswrapper[4745]: I0121 10:38:43.999970 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:44 crc kubenswrapper[4745]: E0121 10:38:44.000255 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:44 crc kubenswrapper[4745]: E0121 10:38:44.000332 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:44 crc kubenswrapper[4745]: E0121 10:38:44.000419 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:46 crc kubenswrapper[4745]: I0121 10:38:46.000064 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:46 crc kubenswrapper[4745]: I0121 10:38:46.000188 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:46 crc kubenswrapper[4745]: I0121 10:38:46.000192 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:46 crc kubenswrapper[4745]: I0121 10:38:46.000229 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:46 crc kubenswrapper[4745]: E0121 10:38:46.001207 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:46 crc kubenswrapper[4745]: E0121 10:38:46.001310 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:46 crc kubenswrapper[4745]: E0121 10:38:46.001379 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:46 crc kubenswrapper[4745]: E0121 10:38:46.001463 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:48 crc kubenswrapper[4745]: I0121 10:38:48.000007 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:48 crc kubenswrapper[4745]: I0121 10:38:48.000060 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:48 crc kubenswrapper[4745]: E0121 10:38:48.000151 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:48 crc kubenswrapper[4745]: I0121 10:38:48.000205 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:48 crc kubenswrapper[4745]: E0121 10:38:48.000296 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:48 crc kubenswrapper[4745]: I0121 10:38:48.000007 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:48 crc kubenswrapper[4745]: E0121 10:38:48.000376 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:48 crc kubenswrapper[4745]: E0121 10:38:48.000435 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:50 crc kubenswrapper[4745]: I0121 10:38:49.999881 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:50 crc kubenswrapper[4745]: I0121 10:38:49.999966 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:50 crc kubenswrapper[4745]: E0121 10:38:50.000083 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:50 crc kubenswrapper[4745]: I0121 10:38:50.000184 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:50 crc kubenswrapper[4745]: I0121 10:38:49.999896 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:50 crc kubenswrapper[4745]: E0121 10:38:50.000339 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:50 crc kubenswrapper[4745]: E0121 10:38:50.000364 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:50 crc kubenswrapper[4745]: E0121 10:38:50.000439 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:51 crc kubenswrapper[4745]: I0121 10:38:51.002144 4745 scope.go:117] "RemoveContainer" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" Jan 21 10:38:51 crc kubenswrapper[4745]: E0121 10:38:51.002470 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l7mcj_openshift-ovn-kubernetes(04dff8d4-15bb-4f8e-b71a-bb104f6de3ed)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" Jan 21 10:38:52 crc kubenswrapper[4745]: I0121 10:38:52.000478 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:52 crc kubenswrapper[4745]: I0121 10:38:52.000566 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:52 crc kubenswrapper[4745]: I0121 10:38:52.000501 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:52 crc kubenswrapper[4745]: E0121 10:38:52.000792 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:52 crc kubenswrapper[4745]: E0121 10:38:52.000961 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:52 crc kubenswrapper[4745]: E0121 10:38:52.001154 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:52 crc kubenswrapper[4745]: I0121 10:38:52.001681 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:52 crc kubenswrapper[4745]: E0121 10:38:52.001991 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:53 crc kubenswrapper[4745]: I0121 10:38:53.999651 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:54 crc kubenswrapper[4745]: I0121 10:38:53.999763 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:54 crc kubenswrapper[4745]: I0121 10:38:54.000850 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:54 crc kubenswrapper[4745]: E0121 10:38:54.001261 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:54 crc kubenswrapper[4745]: I0121 10:38:54.001713 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:54 crc kubenswrapper[4745]: E0121 10:38:54.001943 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:54 crc kubenswrapper[4745]: E0121 10:38:54.002107 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:54 crc kubenswrapper[4745]: E0121 10:38:54.002299 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:55 crc kubenswrapper[4745]: E0121 10:38:55.938082 4745 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 21 10:38:55 crc kubenswrapper[4745]: I0121 10:38:55.999820 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:56 crc kubenswrapper[4745]: I0121 10:38:55.999910 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:56 crc kubenswrapper[4745]: I0121 10:38:56.001092 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:56 crc kubenswrapper[4745]: I0121 10:38:56.001133 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:56 crc kubenswrapper[4745]: E0121 10:38:56.001201 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:56 crc kubenswrapper[4745]: E0121 10:38:56.001265 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:56 crc kubenswrapper[4745]: E0121 10:38:56.001343 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:56 crc kubenswrapper[4745]: E0121 10:38:56.001391 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:56 crc kubenswrapper[4745]: E0121 10:38:56.097469 4745 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 21 10:38:56 crc kubenswrapper[4745]: I0121 10:38:56.884429 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/1.log" Jan 21 10:38:56 crc kubenswrapper[4745]: I0121 10:38:56.884936 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/0.log" Jan 21 10:38:56 crc kubenswrapper[4745]: I0121 10:38:56.884988 4745 generic.go:334] "Generic (PLEG): container finished" podID="25458900-3da2-4c9d-8463-9acde2add0a6" containerID="714407a10230aa649925c34cef574bad9510d3268300bcb3dadaba7c6bc9d9a7" exitCode=1 Jan 21 10:38:56 crc kubenswrapper[4745]: I0121 10:38:56.885025 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p8q45" event={"ID":"25458900-3da2-4c9d-8463-9acde2add0a6","Type":"ContainerDied","Data":"714407a10230aa649925c34cef574bad9510d3268300bcb3dadaba7c6bc9d9a7"} Jan 21 10:38:56 crc kubenswrapper[4745]: I0121 10:38:56.885070 4745 scope.go:117] "RemoveContainer" containerID="099bcddcbc5e17dbb0dee01693ca4629fcf925e4e8371a8e414868aefb94f1af" Jan 21 10:38:56 crc kubenswrapper[4745]: I0121 10:38:56.885492 4745 scope.go:117] "RemoveContainer" containerID="714407a10230aa649925c34cef574bad9510d3268300bcb3dadaba7c6bc9d9a7" Jan 21 10:38:56 crc kubenswrapper[4745]: E0121 10:38:56.885746 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-p8q45_openshift-multus(25458900-3da2-4c9d-8463-9acde2add0a6)\"" pod="openshift-multus/multus-p8q45" podUID="25458900-3da2-4c9d-8463-9acde2add0a6" Jan 21 10:38:57 crc kubenswrapper[4745]: I0121 10:38:57.890927 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/1.log" Jan 21 10:38:57 crc 
kubenswrapper[4745]: I0121 10:38:57.999483 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:38:57 crc kubenswrapper[4745]: I0121 10:38:57.999564 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:38:57 crc kubenswrapper[4745]: E0121 10:38:57.999667 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:38:57 crc kubenswrapper[4745]: I0121 10:38:57.999683 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:38:57 crc kubenswrapper[4745]: E0121 10:38:57.999803 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:38:57 crc kubenswrapper[4745]: E0121 10:38:57.999902 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:38:58 crc kubenswrapper[4745]: I0121 10:38:57.999958 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:38:58 crc kubenswrapper[4745]: E0121 10:38:58.000013 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:38:59 crc kubenswrapper[4745]: I0121 10:38:59.999826 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:39:00 crc kubenswrapper[4745]: E0121 10:39:00.000063 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:39:00 crc kubenswrapper[4745]: I0121 10:39:00.000461 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:39:00 crc kubenswrapper[4745]: E0121 10:39:00.000857 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:39:00 crc kubenswrapper[4745]: I0121 10:39:00.001328 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:39:00 crc kubenswrapper[4745]: E0121 10:39:00.001510 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:39:00 crc kubenswrapper[4745]: I0121 10:39:00.001797 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:39:00 crc kubenswrapper[4745]: E0121 10:39:00.001910 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:39:01 crc kubenswrapper[4745]: E0121 10:39:01.099656 4745 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 10:39:02 crc kubenswrapper[4745]: I0121 10:39:01.999854 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:39:02 crc kubenswrapper[4745]: E0121 10:39:02.000401 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:39:02 crc kubenswrapper[4745]: I0121 10:39:01.999905 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:39:02 crc kubenswrapper[4745]: I0121 10:39:01.999914 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:39:02 crc kubenswrapper[4745]: E0121 10:39:02.000610 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:39:02 crc kubenswrapper[4745]: I0121 10:39:01.999859 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:39:02 crc kubenswrapper[4745]: E0121 10:39:02.000736 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:39:02 crc kubenswrapper[4745]: E0121 10:39:02.001014 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:39:04 crc kubenswrapper[4745]: I0121 10:39:04.000141 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:39:04 crc kubenswrapper[4745]: I0121 10:39:04.000234 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:39:04 crc kubenswrapper[4745]: I0121 10:39:04.000240 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:39:04 crc kubenswrapper[4745]: E0121 10:39:04.000335 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:39:04 crc kubenswrapper[4745]: E0121 10:39:04.000420 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:39:04 crc kubenswrapper[4745]: I0121 10:39:04.000437 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:39:04 crc kubenswrapper[4745]: E0121 10:39:04.000606 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:39:04 crc kubenswrapper[4745]: E0121 10:39:04.000732 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:39:05 crc kubenswrapper[4745]: I0121 10:39:05.001033 4745 scope.go:117] "RemoveContainer" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" Jan 21 10:39:05 crc kubenswrapper[4745]: I0121 10:39:05.923045 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/3.log" Jan 21 10:39:05 crc kubenswrapper[4745]: I0121 10:39:05.926476 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerStarted","Data":"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c"} Jan 21 10:39:05 crc kubenswrapper[4745]: I0121 10:39:05.927672 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:39:05 crc kubenswrapper[4745]: I0121 10:39:05.961990 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podStartSLOduration=106.961961156 podStartE2EDuration="1m46.961961156s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:05.961084964 +0000 UTC m=+130.421872582" watchObservedRunningTime="2026-01-21 10:39:05.961961156 +0000 UTC m=+130.422748764" Jan 21 10:39:05 crc kubenswrapper[4745]: I0121 10:39:05.999649 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:39:05 crc kubenswrapper[4745]: I0121 10:39:05.999675 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:39:05 crc kubenswrapper[4745]: I0121 10:39:05.999680 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:39:06 crc kubenswrapper[4745]: I0121 10:39:06.000143 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:39:06 crc kubenswrapper[4745]: E0121 10:39:06.000735 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:39:06 crc kubenswrapper[4745]: E0121 10:39:06.000848 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:39:06 crc kubenswrapper[4745]: E0121 10:39:06.001010 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:39:06 crc kubenswrapper[4745]: E0121 10:39:06.001127 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:39:06 crc kubenswrapper[4745]: E0121 10:39:06.100814 4745 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 21 10:39:06 crc kubenswrapper[4745]: I0121 10:39:06.569128 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-px52r"] Jan 21 10:39:06 crc kubenswrapper[4745]: I0121 10:39:06.930512 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:39:06 crc kubenswrapper[4745]: E0121 10:39:06.931225 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:39:07 crc kubenswrapper[4745]: I0121 10:39:07.999574 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:39:08 crc kubenswrapper[4745]: I0121 10:39:07.999843 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:39:08 crc kubenswrapper[4745]: E0121 10:39:07.999889 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:39:08 crc kubenswrapper[4745]: I0121 10:39:08.000057 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:39:08 crc kubenswrapper[4745]: E0121 10:39:08.000097 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:39:08 crc kubenswrapper[4745]: I0121 10:39:08.000733 4745 scope.go:117] "RemoveContainer" containerID="714407a10230aa649925c34cef574bad9510d3268300bcb3dadaba7c6bc9d9a7" Jan 21 10:39:08 crc kubenswrapper[4745]: E0121 10:39:08.000745 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:39:08 crc kubenswrapper[4745]: I0121 10:39:08.940336 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/1.log" Jan 21 10:39:08 crc kubenswrapper[4745]: I0121 10:39:08.940413 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p8q45" event={"ID":"25458900-3da2-4c9d-8463-9acde2add0a6","Type":"ContainerStarted","Data":"655ed50b4ec230b78b2634d5bb83e158e7df4aea82278fb856a0f0f490e5d178"} Jan 21 10:39:09 crc kubenswrapper[4745]: I0121 10:39:08.999860 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:39:09 crc kubenswrapper[4745]: E0121 10:39:09.000044 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:39:09 crc kubenswrapper[4745]: I0121 10:39:09.999247 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:39:09 crc kubenswrapper[4745]: I0121 10:39:09.999362 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:39:09 crc kubenswrapper[4745]: E0121 10:39:09.999429 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 21 10:39:09 crc kubenswrapper[4745]: I0121 10:39:09.999473 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:39:09 crc kubenswrapper[4745]: E0121 10:39:09.999664 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 21 10:39:09 crc kubenswrapper[4745]: E0121 10:39:09.999710 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:10.999896 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:39:11 crc kubenswrapper[4745]: E0121 10:39:11.000093 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-px52r" podUID="df21a803-8072-4f8f-8f3a-00267f9c3419" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.923620 4745 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.974919 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2"] Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.975573 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.980669 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.981591 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.981756 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.981787 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.981956 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 10:39:11 crc 
kubenswrapper[4745]: I0121 10:39:11.982786 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-j4phh"] Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.983465 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.984345 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ck2f"] Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.984910 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.988217 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.988418 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.988470 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.989012 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.989057 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.989133 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.989463 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 10:39:11 crc 
kubenswrapper[4745]: I0121 10:39:11.990310 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mlbwr"] Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.991115 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.991415 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh"] Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.991808 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.992175 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.992433 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.992855 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.992877 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.992929 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.993820 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.994676 4745 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 10:39:11 crc kubenswrapper[4745]: I0121 10:39:11.994691 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:11.999975 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.000016 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:11.999975 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.011725 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x6jmv"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.012208 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.013932 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.017373 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.020270 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.020302 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.020444 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.020581 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.020896 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.021356 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dfzgf"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.021975 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.024615 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.024871 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.025131 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.025848 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.026041 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.026579 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.026729 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.026991 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.027425 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.027503 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 10:39:12 crc 
kubenswrapper[4745]: I0121 10:39:12.027655 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.027695 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.027794 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.027854 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.027881 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.027920 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.028279 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.028312 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.028355 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.034020 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.036142 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.039820 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.041185 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.065981 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.066201 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.068847 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.069404 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.069447 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.070338 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.071159 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.073371 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.073501 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.074292 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.074404 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.074676 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075085 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prtz7\" (UniqueName: \"kubernetes.io/projected/2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8-kube-api-access-prtz7\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdz59\" (UID: \"2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075126 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shv7w\" (UniqueName: \"kubernetes.io/projected/03658e3a-6a55-4326-9ab1-9ff0583f55ed-kube-api-access-shv7w\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc 
kubenswrapper[4745]: I0121 10:39:12.075149 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdz59\" (UID: \"2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075173 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-trusted-ca-bundle\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075192 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knxkf\" (UniqueName: \"kubernetes.io/projected/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-kube-api-access-knxkf\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075215 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-client-ca\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075235 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075253 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075273 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5d25df07-ad4c-4a02-bd0b-241e69a4f0f4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6nzgh\" (UID: \"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075296 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-serving-cert\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075324 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkwst\" (UniqueName: \"kubernetes.io/projected/284744f3-7eb6-4977-87c8-5c311188f840-kube-api-access-qkwst\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " 
pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075345 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2228e29c-3b0e-4358-91a2-dcf925981bda-serving-cert\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075362 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f5958e-78cf-428c-b9c0-abae011b2de4-config\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075381 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-serving-cert\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075413 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2228e29c-3b0e-4358-91a2-dcf925981bda-etcd-client\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075431 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-config\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075448 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4f4528-cd42-4baf-92ae-b29df2f83979-config\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075464 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-dir\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075494 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075579 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b4f4528-cd42-4baf-92ae-b29df2f83979-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075602 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-policies\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075626 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2228e29c-3b0e-4358-91a2-dcf925981bda-encryption-config\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075648 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b4f4528-cd42-4baf-92ae-b29df2f83979-serving-cert\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075681 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075704 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d25df07-ad4c-4a02-bd0b-241e69a4f0f4-serving-cert\") pod \"openshift-config-operator-7777fb866f-6nzgh\" (UID: \"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075726 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b4f4528-cd42-4baf-92ae-b29df2f83979-service-ca-bundle\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075745 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075767 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075788 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-console-config\") pod 
\"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075811 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8f5958e-78cf-428c-b9c0-abae011b2de4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075834 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075862 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt29p\" (UniqueName: \"kubernetes.io/projected/9896d393-c134-4abe-ac04-1da7e6ea3aed-kube-api-access-mt29p\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075887 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2228e29c-3b0e-4358-91a2-dcf925981bda-audit-dir\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075908 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-oauth-config\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075929 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075955 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8f5958e-78cf-428c-b9c0-abae011b2de4-images\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.075981 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-config\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076003 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cabd47c-351d-4858-bc6f-a158170d9e9a-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076025 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2228e29c-3b0e-4358-91a2-dcf925981bda-audit-policies\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076047 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjmw9\" (UniqueName: \"kubernetes.io/projected/9b4f4528-cd42-4baf-92ae-b29df2f83979-kube-api-access-jjmw9\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076066 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3cabd47c-351d-4858-bc6f-a158170d9e9a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076089 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzxs9\" (UniqueName: \"kubernetes.io/projected/5d25df07-ad4c-4a02-bd0b-241e69a4f0f4-kube-api-access-jzxs9\") pod \"openshift-config-operator-7777fb866f-6nzgh\" (UID: \"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" 
Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076110 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076128 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3cabd47c-351d-4858-bc6f-a158170d9e9a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076151 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-service-ca\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076182 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll4hp\" (UniqueName: \"kubernetes.io/projected/b8f5958e-78cf-428c-b9c0-abae011b2de4-kube-api-access-ll4hp\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076204 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-client-ca\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076225 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076249 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2228e29c-3b0e-4358-91a2-dcf925981bda-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076285 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03658e3a-6a55-4326-9ab1-9ff0583f55ed-serving-cert\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076305 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-oauth-serving-cert\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc 
kubenswrapper[4745]: I0121 10:39:12.076325 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2228e29c-3b0e-4358-91a2-dcf925981bda-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076344 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfdh\" (UniqueName: \"kubernetes.io/projected/2228e29c-3b0e-4358-91a2-dcf925981bda-kube-api-access-hjfdh\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076365 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076390 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076413 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdz59\" (UID: \"2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.076435 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv7c7\" (UniqueName: \"kubernetes.io/projected/3cabd47c-351d-4858-bc6f-a158170d9e9a-kube-api-access-bv7c7\") pod \"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.077717 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.082349 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.082382 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-gwvtn"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.082867 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.082999 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.083060 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-gwvtn" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.083120 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.083206 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.083308 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.083337 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.083595 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.083704 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.083769 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.084918 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.085314 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.085561 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.085682 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.085790 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.085933 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.086987 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.089490 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.089671 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.090502 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.090701 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n7p28"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.092089 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.092316 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9dn2q"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.092360 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.092996 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.093467 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.093699 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.094256 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.094702 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.094909 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.098386 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5qn2m"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.099425 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.106378 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.117030 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-j4phh"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.119616 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.119676 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.133752 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.135179 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.157841 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202326 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4hp\" (UniqueName: \"kubernetes.io/projected/b8f5958e-78cf-428c-b9c0-abae011b2de4-kube-api-access-ll4hp\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202384 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-service-ca\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202418 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202447 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-client-ca\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202470 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2228e29c-3b0e-4358-91a2-dcf925981bda-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202505 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2228e29c-3b0e-4358-91a2-dcf925981bda-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202593 4745 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-hjfdh\" (UniqueName: \"kubernetes.io/projected/2228e29c-3b0e-4358-91a2-dcf925981bda-kube-api-access-hjfdh\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202625 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03658e3a-6a55-4326-9ab1-9ff0583f55ed-serving-cert\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202650 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-oauth-serving-cert\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202675 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202703 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc 
kubenswrapper[4745]: I0121 10:39:12.202728 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdz59\" (UID: \"2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202751 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bv7c7\" (UniqueName: \"kubernetes.io/projected/3cabd47c-351d-4858-bc6f-a158170d9e9a-kube-api-access-bv7c7\") pod \"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202780 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prtz7\" (UniqueName: \"kubernetes.io/projected/2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8-kube-api-access-prtz7\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdz59\" (UID: \"2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202805 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shv7w\" (UniqueName: \"kubernetes.io/projected/03658e3a-6a55-4326-9ab1-9ff0583f55ed-kube-api-access-shv7w\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202833 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdz59\" (UID: \"2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202859 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-trusted-ca-bundle\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202884 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knxkf\" (UniqueName: \"kubernetes.io/projected/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-kube-api-access-knxkf\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202906 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-client-ca\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202928 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 
10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202953 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.202981 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5d25df07-ad4c-4a02-bd0b-241e69a4f0f4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6nzgh\" (UID: \"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203005 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-serving-cert\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203030 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkwst\" (UniqueName: \"kubernetes.io/projected/284744f3-7eb6-4977-87c8-5c311188f840-kube-api-access-qkwst\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203069 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2228e29c-3b0e-4358-91a2-dcf925981bda-serving-cert\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: 
\"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203094 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f5958e-78cf-428c-b9c0-abae011b2de4-config\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203114 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-serving-cert\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203137 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203156 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2228e29c-3b0e-4358-91a2-dcf925981bda-etcd-client\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203173 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-config\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203190 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4f4528-cd42-4baf-92ae-b29df2f83979-config\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203217 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-dir\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203269 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b4f4528-cd42-4baf-92ae-b29df2f83979-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203290 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-policies\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203309 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2228e29c-3b0e-4358-91a2-dcf925981bda-encryption-config\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203337 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b4f4528-cd42-4baf-92ae-b29df2f83979-serving-cert\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203358 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203352 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mlbwr"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203376 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203494 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d25df07-ad4c-4a02-bd0b-241e69a4f0f4-serving-cert\") pod \"openshift-config-operator-7777fb866f-6nzgh\" (UID: \"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203517 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b4f4528-cd42-4baf-92ae-b29df2f83979-service-ca-bundle\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203566 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203596 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-console-config\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203636 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8f5958e-78cf-428c-b9c0-abae011b2de4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203699 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203733 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt29p\" (UniqueName: \"kubernetes.io/projected/9896d393-c134-4abe-ac04-1da7e6ea3aed-kube-api-access-mt29p\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203758 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2228e29c-3b0e-4358-91a2-dcf925981bda-audit-dir\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203776 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-oauth-config\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203794 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203817 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8f5958e-78cf-428c-b9c0-abae011b2de4-images\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203843 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-config\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203863 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cabd47c-351d-4858-bc6f-a158170d9e9a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203885 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2228e29c-3b0e-4358-91a2-dcf925981bda-audit-policies\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 
10:39:12.203902 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjmw9\" (UniqueName: \"kubernetes.io/projected/9b4f4528-cd42-4baf-92ae-b29df2f83979-kube-api-access-jjmw9\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203923 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3cabd47c-351d-4858-bc6f-a158170d9e9a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203961 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzxs9\" (UniqueName: \"kubernetes.io/projected/5d25df07-ad4c-4a02-bd0b-241e69a4f0f4-kube-api-access-jzxs9\") pod \"openshift-config-operator-7777fb866f-6nzgh\" (UID: \"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.203986 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.204009 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3cabd47c-351d-4858-bc6f-a158170d9e9a-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.204187 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.206013 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-client-ca\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.206086 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3cabd47c-351d-4858-bc6f-a158170d9e9a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.206283 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-oauth-serving-cert\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.207073 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.207702 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b4f4528-cd42-4baf-92ae-b29df2f83979-service-ca-bundle\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.215264 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdz59\" (UID: \"2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.216103 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.216779 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-console-config\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc 
kubenswrapper[4745]: I0121 10:39:12.219372 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b8f5958e-78cf-428c-b9c0-abae011b2de4-images\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.221757 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-config\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.223449 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2228e29c-3b0e-4358-91a2-dcf925981bda-audit-dir\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.225923 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdz59\" (UID: \"2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.226609 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03658e3a-6a55-4326-9ab1-9ff0583f55ed-serving-cert\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.226931 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-trusted-ca-bundle\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.227002 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.227655 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2228e29c-3b0e-4358-91a2-dcf925981bda-audit-policies\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.227720 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2228e29c-3b0e-4358-91a2-dcf925981bda-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.228148 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2228e29c-3b0e-4358-91a2-dcf925981bda-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: 
\"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.228970 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-client-ca\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.229228 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.234011 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2228e29c-3b0e-4358-91a2-dcf925981bda-etcd-client\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.234367 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5d25df07-ad4c-4a02-bd0b-241e69a4f0f4-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6nzgh\" (UID: \"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.235091 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 10:39:12 crc 
kubenswrapper[4745]: I0121 10:39:12.235668 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.236488 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-tk5j9"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.236641 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.236878 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.237407 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.237602 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.237769 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.237960 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.238504 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b4f4528-cd42-4baf-92ae-b29df2f83979-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.238911 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.239330 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9b4f4528-cd42-4baf-92ae-b29df2f83979-config\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.239434 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-dir\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.239349 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.239912 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.240175 4745 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver"/"encryption-config-1" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.240736 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.249437 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-config\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.249474 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-service-ca\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.249984 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5d25df07-ad4c-4a02-bd0b-241e69a4f0f4-serving-cert\") pod \"openshift-config-operator-7777fb866f-6nzgh\" (UID: \"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.250217 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.251055 4745 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8f5958e-78cf-428c-b9c0-abae011b2de4-config\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.251640 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-policies\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.257629 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.258007 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.259334 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b8f5958e-78cf-428c-b9c0-abae011b2de4-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc 
kubenswrapper[4745]: I0121 10:39:12.260005 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.260473 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/3cabd47c-351d-4858-bc6f-a158170d9e9a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.262127 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9b4f4528-cd42-4baf-92ae-b29df2f83979-serving-cert\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.264750 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.240786 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.261128 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.261370 4745 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.261567 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.261613 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.275543 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.261654 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.261725 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.275852 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.261760 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.276187 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dhkkd"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.261829 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.262126 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 10:39:12 crc 
kubenswrapper[4745]: I0121 10:39:12.276582 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-oauth-config\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.276732 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.262212 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.262261 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.262316 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.262367 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.262609 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.262645 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.262695 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.262718 4745 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.279244 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.272173 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.272285 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.272388 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.272504 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.272553 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.289216 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-serving-cert\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.290519 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: 
I0121 10:39:12.290560 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-serving-cert\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.291435 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2228e29c-3b0e-4358-91a2-dcf925981bda-serving-cert\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.292034 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.295145 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.299800 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.300092 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.300280 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.300992 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2228e29c-3b0e-4358-91a2-dcf925981bda-encryption-config\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.304546 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305085 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a81a913-b0f8-44c1-b1ab-dbeab680f536-metrics-tls\") pod \"dns-operator-744455d44c-5qn2m\" (UID: \"9a81a913-b0f8-44c1-b1ab-dbeab680f536\") " pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305142 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-trusted-ca\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305166 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-config\") pod \"machine-approver-56656f9798-mm9f9\" (UID: \"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305210 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dchxt\" (UniqueName: \"kubernetes.io/projected/9a81a913-b0f8-44c1-b1ab-dbeab680f536-kube-api-access-dchxt\") pod \"dns-operator-744455d44c-5qn2m\" (UID: \"9a81a913-b0f8-44c1-b1ab-dbeab680f536\") " pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305237 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89359b3-9f5c-4d38-8bf8-eb833252867b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-srbtm\" (UID: \"a89359b3-9f5c-4d38-8bf8-eb833252867b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305266 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28428682-3f1f-4077-887e-f1570b385a8c-config\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc 
kubenswrapper[4745]: I0121 10:39:12.305280 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv2j2\" (UniqueName: \"kubernetes.io/projected/a89359b3-9f5c-4d38-8bf8-eb833252867b-kube-api-access-nv2j2\") pod \"openshift-apiserver-operator-796bbdcf4f-srbtm\" (UID: \"a89359b3-9f5c-4d38-8bf8-eb833252867b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305303 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-auth-proxy-config\") pod \"machine-approver-56656f9798-mm9f9\" (UID: \"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305318 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-etcd-serving-ca\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305333 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305349 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c531fa6e-de28-476b-8b34-aca8b0e2cc56-etcd-client\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305379 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-config\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305395 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c531fa6e-de28-476b-8b34-aca8b0e2cc56-serving-cert\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305411 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1f4c3a6-f097-4220-a03d-a34e2e70027a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w6snb\" (UID: \"c1f4c3a6-f097-4220-a03d-a34e2e70027a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305438 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-image-import-ca\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305561 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28428682-3f1f-4077-887e-f1570b385a8c-serving-cert\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305642 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccr4b\" (UniqueName: \"kubernetes.io/projected/28428682-3f1f-4077-887e-f1570b385a8c-kube-api-access-ccr4b\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305746 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj29f\" (UniqueName: \"kubernetes.io/projected/c531fa6e-de28-476b-8b34-aca8b0e2cc56-kube-api-access-vj29f\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305846 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tmkm\" (UniqueName: \"kubernetes.io/projected/fe3c7d57-12a7-426c-8c02-fe7f24949bae-kube-api-access-5tmkm\") pod \"downloads-7954f5f757-gwvtn\" (UID: \"fe3c7d57-12a7-426c-8c02-fe7f24949bae\") " pod="openshift-console/downloads-7954f5f757-gwvtn" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305866 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g6qx\" (UniqueName: \"kubernetes.io/projected/c1f4c3a6-f097-4220-a03d-a34e2e70027a-kube-api-access-6g6qx\") pod \"cluster-samples-operator-665b6dd947-w6snb\" (UID: 
\"c1f4c3a6-f097-4220-a03d-a34e2e70027a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.305965 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c531fa6e-de28-476b-8b34-aca8b0e2cc56-encryption-config\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306007 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306051 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xxps\" (UniqueName: \"kubernetes.io/projected/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-kube-api-access-5xxps\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306251 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-audit\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306307 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306521 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28428682-3f1f-4077-887e-f1570b385a8c-trusted-ca\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306568 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a89359b3-9f5c-4d38-8bf8-eb833252867b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-srbtm\" (UID: \"a89359b3-9f5c-4d38-8bf8-eb833252867b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306614 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-machine-approver-tls\") pod \"machine-approver-56656f9798-mm9f9\" (UID: \"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306627 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306660 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-metrics-tls\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306683 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4x2c\" (UniqueName: \"kubernetes.io/projected/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-kube-api-access-f4x2c\") pod \"machine-approver-56656f9798-mm9f9\" (UID: \"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306709 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c531fa6e-de28-476b-8b34-aca8b0e2cc56-node-pullsecrets\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.306727 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c531fa6e-de28-476b-8b34-aca8b0e2cc56-audit-dir\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.307082 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9"] Jan 21 
10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.307369 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.307797 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.308123 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.308340 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.310330 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.311077 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.321923 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z5zq"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.322799 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.325138 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.326273 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.333624 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fcg2s"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.334818 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7r9dl"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.335226 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.351141 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.351203 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.353314 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.354961 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.362167 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.389350 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.398572 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fmgwt"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.408501 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f49241c8-5cc4-49da-be3b-9e6f39dbcc04-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v5dhp\" (UID: \"f49241c8-5cc4-49da-be3b-9e6f39dbcc04\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.408571 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dchxt\" (UniqueName: \"kubernetes.io/projected/9a81a913-b0f8-44c1-b1ab-dbeab680f536-kube-api-access-dchxt\") pod \"dns-operator-744455d44c-5qn2m\" (UID: \"9a81a913-b0f8-44c1-b1ab-dbeab680f536\") " pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.408617 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89359b3-9f5c-4d38-8bf8-eb833252867b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-srbtm\" (UID: \"a89359b3-9f5c-4d38-8bf8-eb833252867b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.408673 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28428682-3f1f-4077-887e-f1570b385a8c-config\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.408695 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv2j2\" (UniqueName: \"kubernetes.io/projected/a89359b3-9f5c-4d38-8bf8-eb833252867b-kube-api-access-nv2j2\") pod \"openshift-apiserver-operator-796bbdcf4f-srbtm\" (UID: \"a89359b3-9f5c-4d38-8bf8-eb833252867b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.408721 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-955l6\" (UID: \"2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.408745 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-auth-proxy-config\") pod \"machine-approver-56656f9798-mm9f9\" (UID: 
\"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.408767 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-etcd-serving-ca\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.410997 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-955l6\" (UID: \"2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.411125 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.414951 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-auth-proxy-config\") pod \"machine-approver-56656f9798-mm9f9\" (UID: \"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.415640 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a89359b3-9f5c-4d38-8bf8-eb833252867b-config\") pod \"openshift-apiserver-operator-796bbdcf4f-srbtm\" (UID: \"a89359b3-9f5c-4d38-8bf8-eb833252867b\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.415648 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.415684 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/28428682-3f1f-4077-887e-f1570b385a8c-config\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416296 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-etcd-serving-ca\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416300 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9dn2q"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416368 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-9rsxp"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416386 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416420 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c531fa6e-de28-476b-8b34-aca8b0e2cc56-etcd-client\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416430 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416445 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca87c694-02c0-4b6f-a4f0-5fd16777f406-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-smff5\" (UID: \"ca87c694-02c0-4b6f-a4f0-5fd16777f406\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416487 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c531fa6e-de28-476b-8b34-aca8b0e2cc56-serving-cert\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416510 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1f4c3a6-f097-4220-a03d-a34e2e70027a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w6snb\" (UID: \"c1f4c3a6-f097-4220-a03d-a34e2e70027a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416554 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-config\") pod \"apiserver-76f77b778f-n7p28\" (UID: 
\"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416576 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ea889c30-b820-47fa-8232-f96ed56ba8e1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dhkkd\" (UID: \"ea889c30-b820-47fa-8232-f96ed56ba8e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416608 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b-config\") pod \"kube-controller-manager-operator-78b949d7b-955l6\" (UID: \"2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416637 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-image-import-ca\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416662 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vj29f\" (UniqueName: \"kubernetes.io/projected/c531fa6e-de28-476b-8b34-aca8b0e2cc56-kube-api-access-vj29f\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416682 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/ca87c694-02c0-4b6f-a4f0-5fd16777f406-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-smff5\" (UID: \"ca87c694-02c0-4b6f-a4f0-5fd16777f406\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416708 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28428682-3f1f-4077-887e-f1570b385a8c-serving-cert\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416733 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccr4b\" (UniqueName: \"kubernetes.io/projected/28428682-3f1f-4077-887e-f1570b385a8c-kube-api-access-ccr4b\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416774 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tmkm\" (UniqueName: \"kubernetes.io/projected/fe3c7d57-12a7-426c-8c02-fe7f24949bae-kube-api-access-5tmkm\") pod \"downloads-7954f5f757-gwvtn\" (UID: \"fe3c7d57-12a7-426c-8c02-fe7f24949bae\") " pod="openshift-console/downloads-7954f5f757-gwvtn" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416810 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c531fa6e-de28-476b-8b34-aca8b0e2cc56-encryption-config\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416832 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6g6qx\" (UniqueName: \"kubernetes.io/projected/c1f4c3a6-f097-4220-a03d-a34e2e70027a-kube-api-access-6g6qx\") pod \"cluster-samples-operator-665b6dd947-w6snb\" (UID: \"c1f4c3a6-f097-4220-a03d-a34e2e70027a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416857 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416877 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xxps\" (UniqueName: \"kubernetes.io/projected/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-kube-api-access-5xxps\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416909 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c53e15f-0e61-49a2-bb11-8b39af387be9-service-ca-bundle\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416929 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71f88dc0-76ba-49bd-8d25-87454497d61d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mxdcm\" (UID: 
\"71f88dc0-76ba-49bd-8d25-87454497d61d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416965 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-audit\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.416986 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a89359b3-9f5c-4d38-8bf8-eb833252867b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-srbtm\" (UID: \"a89359b3-9f5c-4d38-8bf8-eb833252867b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417005 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71f88dc0-76ba-49bd-8d25-87454497d61d-proxy-tls\") pod \"machine-config-controller-84d6567774-mxdcm\" (UID: \"71f88dc0-76ba-49bd-8d25-87454497d61d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417026 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28428682-3f1f-4077-887e-f1570b385a8c-trusted-ca\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417047 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-metrics-tls\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417065 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-machine-approver-tls\") pod \"machine-approver-56656f9798-mm9f9\" (UID: \"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417094 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkspc\" (UniqueName: \"kubernetes.io/projected/5c53e15f-0e61-49a2-bb11-8b39af387be9-kube-api-access-tkspc\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417113 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f49241c8-5cc4-49da-be3b-9e6f39dbcc04-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v5dhp\" (UID: \"f49241c8-5cc4-49da-be3b-9e6f39dbcc04\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417133 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4x2c\" (UniqueName: \"kubernetes.io/projected/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-kube-api-access-f4x2c\") pod \"machine-approver-56656f9798-mm9f9\" (UID: \"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417152 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl4j9\" (UniqueName: \"kubernetes.io/projected/71f88dc0-76ba-49bd-8d25-87454497d61d-kube-api-access-xl4j9\") pod \"machine-config-controller-84d6567774-mxdcm\" (UID: \"71f88dc0-76ba-49bd-8d25-87454497d61d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417171 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c531fa6e-de28-476b-8b34-aca8b0e2cc56-node-pullsecrets\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417192 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1eb90eab-f69a-4fef-aef1-b8f4473b91fd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfgt5\" (UID: \"1eb90eab-f69a-4fef-aef1-b8f4473b91fd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417224 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c531fa6e-de28-476b-8b34-aca8b0e2cc56-audit-dir\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417245 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c53e15f-0e61-49a2-bb11-8b39af387be9-stats-auth\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417264 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c53e15f-0e61-49a2-bb11-8b39af387be9-default-certificate\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417281 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f49241c8-5cc4-49da-be3b-9e6f39dbcc04-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v5dhp\" (UID: \"f49241c8-5cc4-49da-be3b-9e6f39dbcc04\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417300 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb644\" (UniqueName: \"kubernetes.io/projected/1eb90eab-f69a-4fef-aef1-b8f4473b91fd-kube-api-access-qb644\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfgt5\" (UID: \"1eb90eab-f69a-4fef-aef1-b8f4473b91fd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417317 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca87c694-02c0-4b6f-a4f0-5fd16777f406-config\") pod \"kube-apiserver-operator-766d6c64bb-smff5\" (UID: \"ca87c694-02c0-4b6f-a4f0-5fd16777f406\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417339 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a81a913-b0f8-44c1-b1ab-dbeab680f536-metrics-tls\") pod \"dns-operator-744455d44c-5qn2m\" (UID: \"9a81a913-b0f8-44c1-b1ab-dbeab680f536\") " pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417365 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-trusted-ca\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417382 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-config\") pod \"machine-approver-56656f9798-mm9f9\" (UID: \"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417407 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c53e15f-0e61-49a2-bb11-8b39af387be9-metrics-certs\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417433 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtzlq\" (UniqueName: \"kubernetes.io/projected/ea889c30-b820-47fa-8232-f96ed56ba8e1-kube-api-access-rtzlq\") 
pod \"multus-admission-controller-857f4d67dd-dhkkd\" (UID: \"ea889c30-b820-47fa-8232-f96ed56ba8e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417442 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.417524 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.418546 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.419287 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.419381 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.420676 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c531fa6e-de28-476b-8b34-aca8b0e2cc56-etcd-client\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.421763 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28428682-3f1f-4077-887e-f1570b385a8c-trusted-ca\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.422019 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-image-import-ca\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.424939 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c531fa6e-de28-476b-8b34-aca8b0e2cc56-encryption-config\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.425211 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/c1f4c3a6-f097-4220-a03d-a34e2e70027a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-w6snb\" (UID: \"c1f4c3a6-f097-4220-a03d-a34e2e70027a\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.428121 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-trusted-ca\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.428920 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-machine-approver-tls\") pod \"machine-approver-56656f9798-mm9f9\" (UID: \"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.429087 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c531fa6e-de28-476b-8b34-aca8b0e2cc56-node-pullsecrets\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.429123 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c531fa6e-de28-476b-8b34-aca8b0e2cc56-audit-dir\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.429482 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-config\") pod \"machine-approver-56656f9798-mm9f9\" (UID: \"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") 
" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.431098 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.432322 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28428682-3f1f-4077-887e-f1570b385a8c-serving-cert\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.434258 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-metrics-tls\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.435028 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-audit\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.435091 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c531fa6e-de28-476b-8b34-aca8b0e2cc56-config\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " 
pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.438628 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x6jmv"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.438716 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ck2f"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.438733 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gwvtn"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.441389 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.442726 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dhkkd"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.442868 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.442965 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dfzgf"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.444882 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prtz7\" (UniqueName: \"kubernetes.io/projected/2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8-kube-api-access-prtz7\") pod \"openshift-controller-manager-operator-756b6f6bc6-hdz59\" (UID: \"2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.446895 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a89359b3-9f5c-4d38-8bf8-eb833252867b-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-srbtm\" (UID: \"a89359b3-9f5c-4d38-8bf8-eb833252867b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.446972 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9a81a913-b0f8-44c1-b1ab-dbeab680f536-metrics-tls\") pod \"dns-operator-744455d44c-5qn2m\" (UID: \"9a81a913-b0f8-44c1-b1ab-dbeab680f536\") " pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.447021 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c531fa6e-de28-476b-8b34-aca8b0e2cc56-serving-cert\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:12 crc kubenswrapper[4745]: 
I0121 10:39:12.448198 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.450492 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.455837 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shv7w\" (UniqueName: \"kubernetes.io/projected/03658e3a-6a55-4326-9ab1-9ff0583f55ed-kube-api-access-shv7w\") pod \"controller-manager-879f6c89f-x6jmv\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.451640 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.457191 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.457209 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.457220 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.457634 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knxkf\" (UniqueName: \"kubernetes.io/projected/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-kube-api-access-knxkf\") pod \"route-controller-manager-6576b87f9c-pbfgr\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.460089 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5qn2m"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.460899 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.461378 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n7p28"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.463446 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.464893 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.465255 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.466399 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjmw9\" (UniqueName: \"kubernetes.io/projected/9b4f4528-cd42-4baf-92ae-b29df2f83979-kube-api-access-jjmw9\") pod \"authentication-operator-69f744f599-mlbwr\" (UID: \"9b4f4528-cd42-4baf-92ae-b29df2f83979\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.466552 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.467830 4745 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fcg2s"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.469783 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.471224 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.471777 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9rsxp"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.472898 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.475425 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tf44k"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.477152 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jc6n7"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.477702 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.477799 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7r9dl"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.477961 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jc6n7" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.478343 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.478613 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.478414 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.480154 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fmgwt"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.481541 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.484128 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.484162 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.484999 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4hp\" (UniqueName: \"kubernetes.io/projected/b8f5958e-78cf-428c-b9c0-abae011b2de4-kube-api-access-ll4hp\") pod \"machine-api-operator-5694c8668f-dfzgf\" (UID: \"b8f5958e-78cf-428c-b9c0-abae011b2de4\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.485706 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z5zq"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.487269 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ingress-canary/ingress-canary-l6rr9"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.488184 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l6rr9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.494480 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.496395 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l6rr9"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.514720 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt29p\" (UniqueName: \"kubernetes.io/projected/9896d393-c134-4abe-ac04-1da7e6ea3aed-kube-api-access-mt29p\") pod \"oauth-openshift-558db77b4-5ck2f\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519170 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-955l6\" (UID: \"2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519229 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/be583ef4-64bc-485e-8f93-d48e090f8197-proxy-tls\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519258 
4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73b9d8b3-fe5c-47c4-bc95-10fc459b4754-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhs27\" (UID: \"73b9d8b3-fe5c-47c4-bc95-10fc459b4754\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519300 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/ea889c30-b820-47fa-8232-f96ed56ba8e1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dhkkd\" (UID: \"ea889c30-b820-47fa-8232-f96ed56ba8e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519327 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b-config\") pod \"kube-controller-manager-operator-78b949d7b-955l6\" (UID: \"2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519352 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/afdc07ef-bbdc-4788-9393-fc47b4fb2601-webhook-cert\") pod \"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519414 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/afdc07ef-bbdc-4788-9393-fc47b4fb2601-tmpfs\") pod 
\"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519441 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b9d8b3-fe5c-47c4-bc95-10fc459b4754-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhs27\" (UID: \"73b9d8b3-fe5c-47c4-bc95-10fc459b4754\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519476 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bxqf\" (UniqueName: \"kubernetes.io/projected/73b9d8b3-fe5c-47c4-bc95-10fc459b4754-kube-api-access-9bxqf\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhs27\" (UID: \"73b9d8b3-fe5c-47c4-bc95-10fc459b4754\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519506 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/be583ef4-64bc-485e-8f93-d48e090f8197-images\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519587 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl4j9\" (UniqueName: \"kubernetes.io/projected/71f88dc0-76ba-49bd-8d25-87454497d61d-kube-api-access-xl4j9\") pod \"machine-config-controller-84d6567774-mxdcm\" (UID: \"71f88dc0-76ba-49bd-8d25-87454497d61d\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519618 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f49241c8-5cc4-49da-be3b-9e6f39dbcc04-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v5dhp\" (UID: \"f49241c8-5cc4-49da-be3b-9e6f39dbcc04\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519683 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f49241c8-5cc4-49da-be3b-9e6f39dbcc04-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v5dhp\" (UID: \"f49241c8-5cc4-49da-be3b-9e6f39dbcc04\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519734 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svtjn\" (UniqueName: \"kubernetes.io/projected/be583ef4-64bc-485e-8f93-d48e090f8197-kube-api-access-svtjn\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519762 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtzlq\" (UniqueName: \"kubernetes.io/projected/ea889c30-b820-47fa-8232-f96ed56ba8e1-kube-api-access-rtzlq\") pod \"multus-admission-controller-857f4d67dd-dhkkd\" (UID: \"ea889c30-b820-47fa-8232-f96ed56ba8e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519802 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f49241c8-5cc4-49da-be3b-9e6f39dbcc04-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v5dhp\" (UID: \"f49241c8-5cc4-49da-be3b-9e6f39dbcc04\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519889 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-955l6\" (UID: \"2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.519922 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca87c694-02c0-4b6f-a4f0-5fd16777f406-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-smff5\" (UID: \"ca87c694-02c0-4b6f-a4f0-5fd16777f406\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.520111 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/afdc07ef-bbdc-4788-9393-fc47b4fb2601-apiservice-cert\") pod \"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.520185 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca87c694-02c0-4b6f-a4f0-5fd16777f406-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-smff5\" (UID: 
\"ca87c694-02c0-4b6f-a4f0-5fd16777f406\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.520265 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jgm6\" (UniqueName: \"kubernetes.io/projected/afdc07ef-bbdc-4788-9393-fc47b4fb2601-kube-api-access-4jgm6\") pod \"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.520359 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/be583ef4-64bc-485e-8f93-d48e090f8197-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.520429 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c53e15f-0e61-49a2-bb11-8b39af387be9-service-ca-bundle\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.520456 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71f88dc0-76ba-49bd-8d25-87454497d61d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mxdcm\" (UID: \"71f88dc0-76ba-49bd-8d25-87454497d61d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.520582 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71f88dc0-76ba-49bd-8d25-87454497d61d-proxy-tls\") pod \"machine-config-controller-84d6567774-mxdcm\" (UID: \"71f88dc0-76ba-49bd-8d25-87454497d61d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.520671 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkspc\" (UniqueName: \"kubernetes.io/projected/5c53e15f-0e61-49a2-bb11-8b39af387be9-kube-api-access-tkspc\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.520747 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1eb90eab-f69a-4fef-aef1-b8f4473b91fd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfgt5\" (UID: \"1eb90eab-f69a-4fef-aef1-b8f4473b91fd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.521043 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c53e15f-0e61-49a2-bb11-8b39af387be9-stats-auth\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.521070 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c53e15f-0e61-49a2-bb11-8b39af387be9-default-certificate\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " 
pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.521113 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca87c694-02c0-4b6f-a4f0-5fd16777f406-config\") pod \"kube-apiserver-operator-766d6c64bb-smff5\" (UID: \"ca87c694-02c0-4b6f-a4f0-5fd16777f406\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.521139 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb644\" (UniqueName: \"kubernetes.io/projected/1eb90eab-f69a-4fef-aef1-b8f4473b91fd-kube-api-access-qb644\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfgt5\" (UID: \"1eb90eab-f69a-4fef-aef1-b8f4473b91fd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.521162 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c53e15f-0e61-49a2-bb11-8b39af387be9-metrics-certs\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.523036 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/71f88dc0-76ba-49bd-8d25-87454497d61d-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-mxdcm\" (UID: \"71f88dc0-76ba-49bd-8d25-87454497d61d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.523428 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bv7c7\" (UniqueName: 
\"kubernetes.io/projected/3cabd47c-351d-4858-bc6f-a158170d9e9a-kube-api-access-bv7c7\") pod \"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.526167 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tf44k"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.536570 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzxs9\" (UniqueName: \"kubernetes.io/projected/5d25df07-ad4c-4a02-bd0b-241e69a4f0f4-kube-api-access-jzxs9\") pod \"openshift-config-operator-7777fb866f-6nzgh\" (UID: \"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.554846 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3cabd47c-351d-4858-bc6f-a158170d9e9a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-fsw2t\" (UID: \"3cabd47c-351d-4858-bc6f-a158170d9e9a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.578399 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.584313 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjfdh\" (UniqueName: \"kubernetes.io/projected/2228e29c-3b0e-4358-91a2-dcf925981bda-kube-api-access-hjfdh\") pod \"apiserver-7bbb656c7d-vncj2\" (UID: \"2228e29c-3b0e-4358-91a2-dcf925981bda\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.600138 4745 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.604031 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.622857 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/be583ef4-64bc-485e-8f93-d48e090f8197-proxy-tls\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.622904 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73b9d8b3-fe5c-47c4-bc95-10fc459b4754-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhs27\" (UID: \"73b9d8b3-fe5c-47c4-bc95-10fc459b4754\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.622957 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/afdc07ef-bbdc-4788-9393-fc47b4fb2601-webhook-cert\") pod \"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.623002 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/afdc07ef-bbdc-4788-9393-fc47b4fb2601-tmpfs\") pod \"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" 
Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.623028 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b9d8b3-fe5c-47c4-bc95-10fc459b4754-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhs27\" (UID: \"73b9d8b3-fe5c-47c4-bc95-10fc459b4754\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.623047 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bxqf\" (UniqueName: \"kubernetes.io/projected/73b9d8b3-fe5c-47c4-bc95-10fc459b4754-kube-api-access-9bxqf\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhs27\" (UID: \"73b9d8b3-fe5c-47c4-bc95-10fc459b4754\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.623069 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/be583ef4-64bc-485e-8f93-d48e090f8197-images\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.623141 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svtjn\" (UniqueName: \"kubernetes.io/projected/be583ef4-64bc-485e-8f93-d48e090f8197-kube-api-access-svtjn\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.623258 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/afdc07ef-bbdc-4788-9393-fc47b4fb2601-apiservice-cert\") pod \"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.623296 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jgm6\" (UniqueName: \"kubernetes.io/projected/afdc07ef-bbdc-4788-9393-fc47b4fb2601-kube-api-access-4jgm6\") pod \"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.623324 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/be583ef4-64bc-485e-8f93-d48e090f8197-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.624289 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/afdc07ef-bbdc-4788-9393-fc47b4fb2601-tmpfs\") pod \"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.624342 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/be583ef4-64bc-485e-8f93-d48e090f8197-auth-proxy-config\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:12 crc kubenswrapper[4745]: 
I0121 10:39:12.624514 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.638279 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/1eb90eab-f69a-4fef-aef1-b8f4473b91fd-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfgt5\" (UID: \"1eb90eab-f69a-4fef-aef1-b8f4473b91fd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.640951 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.658607 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.667177 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f49241c8-5cc4-49da-be3b-9e6f39dbcc04-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v5dhp\" (UID: \"f49241c8-5cc4-49da-be3b-9e6f39dbcc04\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.678168 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.679216 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.682734 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f49241c8-5cc4-49da-be3b-9e6f39dbcc04-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v5dhp\" (UID: \"f49241c8-5cc4-49da-be3b-9e6f39dbcc04\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.692314 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.698765 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.700890 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.710946 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/5c53e15f-0e61-49a2-bb11-8b39af387be9-default-certificate\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.718847 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.728796 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/5c53e15f-0e61-49a2-bb11-8b39af387be9-stats-auth\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.738785 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.746725 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.747820 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5c53e15f-0e61-49a2-bb11-8b39af387be9-metrics-certs\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.748555 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.750846 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.760412 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.779167 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.784383 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c53e15f-0e61-49a2-bb11-8b39af387be9-service-ca-bundle\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.794323 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" Jan 21 10:39:12 crc kubenswrapper[4745]: W0121 10:39:12.795276 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2525cc3f_c2ee_4eb8_b50f_6672d6ffe3a8.slice/crio-e238b9971c6f0c47b5a00f782ba6712f8dc1c7b60bc98b85acb8f07c32f2f53b WatchSource:0}: Error finding container e238b9971c6f0c47b5a00f782ba6712f8dc1c7b60bc98b85acb8f07c32f2f53b: Status 404 returned error can't find the container with id e238b9971c6f0c47b5a00f782ba6712f8dc1c7b60bc98b85acb8f07c32f2f53b Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.807086 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr"] Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.812801 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.821188 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.865862 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkwst\" (UniqueName: \"kubernetes.io/projected/284744f3-7eb6-4977-87c8-5c311188f840-kube-api-access-qkwst\") pod \"console-f9d7485db-j4phh\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.879151 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.886794 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/ea889c30-b820-47fa-8232-f96ed56ba8e1-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dhkkd\" (UID: \"ea889c30-b820-47fa-8232-f96ed56ba8e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.901723 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.909466 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/71f88dc0-76ba-49bd-8d25-87454497d61d-proxy-tls\") pod \"machine-config-controller-84d6567774-mxdcm\" (UID: \"71f88dc0-76ba-49bd-8d25-87454497d61d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.919285 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.923369 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.941111 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.958806 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.965118 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" 
event={"ID":"a25f7cf6-d63e-48f4-a43a-623ee2cf7908","Type":"ContainerStarted","Data":"9a39867b83fd30970030b47957a50a4c4d63c554968d60b4792796a1473b12fe"} Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.966788 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" event={"ID":"2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8","Type":"ContainerStarted","Data":"e238b9971c6f0c47b5a00f782ba6712f8dc1c7b60bc98b85acb8f07c32f2f53b"} Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.970876 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca87c694-02c0-4b6f-a4f0-5fd16777f406-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-smff5\" (UID: \"ca87c694-02c0-4b6f-a4f0-5fd16777f406\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.985602 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.996434 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca87c694-02c0-4b6f-a4f0-5fd16777f406-config\") pod \"kube-apiserver-operator-766d6c64bb-smff5\" (UID: \"ca87c694-02c0-4b6f-a4f0-5fd16777f406\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.998503 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 10:39:12 crc kubenswrapper[4745]: I0121 10:39:12.999716 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.021832 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.046858 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.060151 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.062639 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b-config\") pod \"kube-controller-manager-operator-78b949d7b-955l6\" (UID: \"2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.067484 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-955l6\" (UID: \"2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.086205 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.098154 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mlbwr"] Jan 21 10:39:13 crc 
kubenswrapper[4745]: I0121 10:39:13.100519 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.120099 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2"] Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.121691 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.139473 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 10:39:13 crc kubenswrapper[4745]: W0121 10:39:13.154917 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2228e29c_3b0e_4358_91a2_dcf925981bda.slice/crio-0b7581969c01ee7f44af83cd819c9b35137bc000c9c9d309d5abd394d29a6b6b WatchSource:0}: Error finding container 0b7581969c01ee7f44af83cd819c9b35137bc000c9c9d309d5abd394d29a6b6b: Status 404 returned error can't find the container with id 0b7581969c01ee7f44af83cd819c9b35137bc000c9c9d309d5abd394d29a6b6b Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.161966 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.179961 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.199493 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.224617 4745 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.232151 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/afdc07ef-bbdc-4788-9393-fc47b4fb2601-apiservice-cert\") pod \"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.235239 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/afdc07ef-bbdc-4788-9393-fc47b4fb2601-webhook-cert\") pod \"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.255046 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.261922 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/be583ef4-64bc-485e-8f93-d48e090f8197-images\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.274673 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.278290 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.292143 4745 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-dfzgf"] Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.311190 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.316896 4745 request.go:700] Waited for 1.005433163s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmco-proxy-tls&limit=500&resourceVersion=0 Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.319406 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.328378 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x6jmv"] Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.333800 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/be583ef4-64bc-485e-8f93-d48e090f8197-proxy-tls\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.340013 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.357768 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73b9d8b3-fe5c-47c4-bc95-10fc459b4754-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhs27\" (UID: 
\"73b9d8b3-fe5c-47c4-bc95-10fc459b4754\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.359822 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.376951 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh"] Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.378164 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-j4phh"] Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.379997 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.401088 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.405204 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ck2f"] Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.406028 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73b9d8b3-fe5c-47c4-bc95-10fc459b4754-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhs27\" (UID: \"73b9d8b3-fe5c-47c4-bc95-10fc459b4754\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.419566 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 10:39:13 crc 
kubenswrapper[4745]: I0121 10:39:13.428171 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t"] Jan 21 10:39:13 crc kubenswrapper[4745]: W0121 10:39:13.436405 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d25df07_ad4c_4a02_bd0b_241e69a4f0f4.slice/crio-07aba9c0354fe38b5a2469a5c6364a55537c8abdf2bd5c88f5fbab00013110a7 WatchSource:0}: Error finding container 07aba9c0354fe38b5a2469a5c6364a55537c8abdf2bd5c88f5fbab00013110a7: Status 404 returned error can't find the container with id 07aba9c0354fe38b5a2469a5c6364a55537c8abdf2bd5c88f5fbab00013110a7 Jan 21 10:39:13 crc kubenswrapper[4745]: W0121 10:39:13.437294 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9896d393_c134_4abe_ac04_1da7e6ea3aed.slice/crio-4823ba21160b376dc2ed3287dd45d0383278ee7036269e06c92e3fb4d3ae6e70 WatchSource:0}: Error finding container 4823ba21160b376dc2ed3287dd45d0383278ee7036269e06c92e3fb4d3ae6e70: Status 404 returned error can't find the container with id 4823ba21160b376dc2ed3287dd45d0383278ee7036269e06c92e3fb4d3ae6e70 Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.438021 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 10:39:13 crc kubenswrapper[4745]: W0121 10:39:13.449889 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cabd47c_351d_4858_bc6f_a158170d9e9a.slice/crio-000d8891443992781672698da124fd399c0c52ab66f22f2e304b742885cd550c WatchSource:0}: Error finding container 000d8891443992781672698da124fd399c0c52ab66f22f2e304b742885cd550c: Status 404 returned error can't find the container with id 
000d8891443992781672698da124fd399c0c52ab66f22f2e304b742885cd550c Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.464866 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.479834 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.499463 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.519673 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.539578 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.559570 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.585074 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.606958 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.618039 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.638171 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.660498 4745 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.678857 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.699296 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.719801 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.782676 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv2j2\" (UniqueName: \"kubernetes.io/projected/a89359b3-9f5c-4d38-8bf8-eb833252867b-kube-api-access-nv2j2\") pod \"openshift-apiserver-operator-796bbdcf4f-srbtm\" (UID: \"a89359b3-9f5c-4d38-8bf8-eb833252867b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.805655 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dchxt\" (UniqueName: \"kubernetes.io/projected/9a81a913-b0f8-44c1-b1ab-dbeab680f536-kube-api-access-dchxt\") pod \"dns-operator-744455d44c-5qn2m\" (UID: \"9a81a913-b0f8-44c1-b1ab-dbeab680f536\") " pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.815967 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.818552 4745 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.840103 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.859209 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.879577 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.906078 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.920583 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.939188 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.959586 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.976304 4745 generic.go:334] "Generic (PLEG): container finished" podID="2228e29c-3b0e-4358-91a2-dcf925981bda" containerID="4d061dcb40f48151d4d6966b15e9d9192e7401add4ab60099fb5db889c1a84a8" exitCode=0
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.976456 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" event={"ID":"2228e29c-3b0e-4358-91a2-dcf925981bda","Type":"ContainerDied","Data":"4d061dcb40f48151d4d6966b15e9d9192e7401add4ab60099fb5db889c1a84a8"}
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.976519 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" event={"ID":"2228e29c-3b0e-4358-91a2-dcf925981bda","Type":"ContainerStarted","Data":"0b7581969c01ee7f44af83cd819c9b35137bc000c9c9d309d5abd394d29a6b6b"}
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.989116 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.994295 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" event={"ID":"9b4f4528-cd42-4baf-92ae-b29df2f83979","Type":"ContainerStarted","Data":"be5b42410acb45fe5748ab5924f4cfced1db50e2e27c43f5b40ea27aba1be096"}
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.994361 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" event={"ID":"9b4f4528-cd42-4baf-92ae-b29df2f83979","Type":"ContainerStarted","Data":"e6d2144cfd57c0395e39c10e8405b8ba09fa54d396912c9b65c200e4ba10f149"}
Jan 21 10:39:13 crc kubenswrapper[4745]: I0121 10:39:13.998726 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.009383 4745 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5ck2f container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body=
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.009473 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" podUID="9896d393-c134-4abe-ac04-1da7e6ea3aed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.011341 4745 generic.go:334] "Generic (PLEG): container finished" podID="5d25df07-ad4c-4a02-bd0b-241e69a4f0f4" containerID="a2fced2f8c10a128f1836388a5aa3507208ec89a55dcd357482bd0735b7668e4" exitCode=0
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.020945 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.021601 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" event={"ID":"9896d393-c134-4abe-ac04-1da7e6ea3aed","Type":"ContainerStarted","Data":"b9c1cc26369606b702a2a7976adfea9d28cf584161d0e9a2206b2e356ce23280"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.021671 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.021687 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" event={"ID":"9896d393-c134-4abe-ac04-1da7e6ea3aed","Type":"ContainerStarted","Data":"4823ba21160b376dc2ed3287dd45d0383278ee7036269e06c92e3fb4d3ae6e70"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.021700 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" event={"ID":"3cabd47c-351d-4858-bc6f-a158170d9e9a","Type":"ContainerStarted","Data":"8f3aaaa14c4cb0bf82be2dd4a2ddae4838d76b1af78bfe61d3cd19dec93a56c8"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.021713 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" event={"ID":"3cabd47c-351d-4858-bc6f-a158170d9e9a","Type":"ContainerStarted","Data":"000d8891443992781672698da124fd399c0c52ab66f22f2e304b742885cd550c"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.021724 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" event={"ID":"b8f5958e-78cf-428c-b9c0-abae011b2de4","Type":"ContainerStarted","Data":"097af684d7526312d91c3971256ba4c74cb5d5b41a9cf4cc3175542565a186d3"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.021749 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" event={"ID":"b8f5958e-78cf-428c-b9c0-abae011b2de4","Type":"ContainerStarted","Data":"e6bf639f8f385cdde4b528a4a1927bf133ce5dd011e1feed6f48419c3ccf9dc4"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.021760 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" event={"ID":"b8f5958e-78cf-428c-b9c0-abae011b2de4","Type":"ContainerStarted","Data":"64d24e4993a5adc94e12611d25bb971e0d9856451d8929d73e07e6e6610a7d30"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.021851 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" event={"ID":"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4","Type":"ContainerDied","Data":"a2fced2f8c10a128f1836388a5aa3507208ec89a55dcd357482bd0735b7668e4"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.021919 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" event={"ID":"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4","Type":"ContainerStarted","Data":"07aba9c0354fe38b5a2469a5c6364a55537c8abdf2bd5c88f5fbab00013110a7"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.035433 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j4phh" event={"ID":"284744f3-7eb6-4977-87c8-5c311188f840","Type":"ContainerStarted","Data":"4e07c5a2f3d033b0e81dd61e9f6fb02e10c065b9399cc0297873f9ae965f9184"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.035524 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j4phh" event={"ID":"284744f3-7eb6-4977-87c8-5c311188f840","Type":"ContainerStarted","Data":"368575159fa50e38f5f63e47eb5df159cab8f85feda7e139b8cca66e049e585a"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.039290 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.047879 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" event={"ID":"2525cc3f-c2ee-4eb8-b50f-6672d6ffe3a8","Type":"ContainerStarted","Data":"a4f259c59b34ab199871f91118be693f74d2ea835c7f3f5877469ffb55728a5c"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.049758 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" event={"ID":"03658e3a-6a55-4326-9ab1-9ff0583f55ed","Type":"ContainerStarted","Data":"841613b74e80c8e2a1ee24f5fe43aa3c38eacca2977ac18660bcd58ba1de19cb"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.049789 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" event={"ID":"03658e3a-6a55-4326-9ab1-9ff0583f55ed","Type":"ContainerStarted","Data":"57020ecef1aac18819510da788f44b895d41e4d921390bb9b51f6397ba43d904"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.050242 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.051319 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" event={"ID":"a25f7cf6-d63e-48f4-a43a-623ee2cf7908","Type":"ContainerStarted","Data":"91779cd83f9cc81c41e34014cf49576a02007a9fb25c7c5e6faa2b9c152137a1"}
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.051773 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.051864 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.052982 4745 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-x6jmv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.053026 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" podUID="03658e3a-6a55-4326-9ab1-9ff0583f55ed" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.063771 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.077778 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.078826 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.088672 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.103234 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.117869 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.170793 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.174174 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccr4b\" (UniqueName: \"kubernetes.io/projected/28428682-3f1f-4077-887e-f1570b385a8c-kube-api-access-ccr4b\") pod \"console-operator-58897d9998-9dn2q\" (UID: \"28428682-3f1f-4077-887e-f1570b385a8c\") " pod="openshift-console-operator/console-operator-58897d9998-9dn2q"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.215617 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vj29f\" (UniqueName: \"kubernetes.io/projected/c531fa6e-de28-476b-8b34-aca8b0e2cc56-kube-api-access-vj29f\") pod \"apiserver-76f77b778f-n7p28\" (UID: \"c531fa6e-de28-476b-8b34-aca8b0e2cc56\") " pod="openshift-apiserver/apiserver-76f77b778f-n7p28"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.234695 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xxps\" (UniqueName: \"kubernetes.io/projected/5fb54c7c-4796-4d7d-8fce-519b5323c2ad-kube-api-access-5xxps\") pod \"ingress-operator-5b745b69d9-2ljcm\" (UID: \"5fb54c7c-4796-4d7d-8fce-519b5323c2ad\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.250412 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tmkm\" (UniqueName: \"kubernetes.io/projected/fe3c7d57-12a7-426c-8c02-fe7f24949bae-kube-api-access-5tmkm\") pod \"downloads-7954f5f757-gwvtn\" (UID: \"fe3c7d57-12a7-426c-8c02-fe7f24949bae\") " pod="openshift-console/downloads-7954f5f757-gwvtn"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.273341 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6g6qx\" (UniqueName: \"kubernetes.io/projected/c1f4c3a6-f097-4220-a03d-a34e2e70027a-kube-api-access-6g6qx\") pod \"cluster-samples-operator-665b6dd947-w6snb\" (UID: \"c1f4c3a6-f097-4220-a03d-a34e2e70027a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.285044 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.297736 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4x2c\" (UniqueName: \"kubernetes.io/projected/ba46635c-6397-49d9-9500-8c6e6c0fc4c1-kube-api-access-f4x2c\") pod \"machine-approver-56656f9798-mm9f9\" (UID: \"ba46635c-6397-49d9-9500-8c6e6c0fc4c1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.310594 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.324690 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.324806 4745 request.go:700] Waited for 1.846459801s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.328664 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.333685 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-gwvtn"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.345918 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.346462 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.369151 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.369902 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n7p28"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.375730 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-9dn2q"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.378715 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.388156 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.431453 4745 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.432150 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.439026 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.464233 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.491093 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.511759 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.569479 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl4j9\" (UniqueName: \"kubernetes.io/projected/71f88dc0-76ba-49bd-8d25-87454497d61d-kube-api-access-xl4j9\") pod \"machine-config-controller-84d6567774-mxdcm\" (UID: \"71f88dc0-76ba-49bd-8d25-87454497d61d\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.604409 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtzlq\" (UniqueName: \"kubernetes.io/projected/ea889c30-b820-47fa-8232-f96ed56ba8e1-kube-api-access-rtzlq\") pod \"multus-admission-controller-857f4d67dd-dhkkd\" (UID: \"ea889c30-b820-47fa-8232-f96ed56ba8e1\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.621087 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f49241c8-5cc4-49da-be3b-9e6f39dbcc04-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-v5dhp\" (UID: \"f49241c8-5cc4-49da-be3b-9e6f39dbcc04\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.645693 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkspc\" (UniqueName: \"kubernetes.io/projected/5c53e15f-0e61-49a2-bb11-8b39af387be9-kube-api-access-tkspc\") pod \"router-default-5444994796-tk5j9\" (UID: \"5c53e15f-0e61-49a2-bb11-8b39af387be9\") " pod="openshift-ingress/router-default-5444994796-tk5j9"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.660614 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb644\" (UniqueName: \"kubernetes.io/projected/1eb90eab-f69a-4fef-aef1-b8f4473b91fd-kube-api-access-qb644\") pod \"control-plane-machine-set-operator-78cbb6b69f-nfgt5\" (UID: \"1eb90eab-f69a-4fef-aef1-b8f4473b91fd\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.687976 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bxqf\" (UniqueName: \"kubernetes.io/projected/73b9d8b3-fe5c-47c4-bc95-10fc459b4754-kube-api-access-9bxqf\") pod \"kube-storage-version-migrator-operator-b67b599dd-hhs27\" (UID: \"73b9d8b3-fe5c-47c4-bc95-10fc459b4754\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.696738 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.706162 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jgm6\" (UniqueName: \"kubernetes.io/projected/afdc07ef-bbdc-4788-9393-fc47b4fb2601-kube-api-access-4jgm6\") pod \"packageserver-d55dfcdfc-bchs9\" (UID: \"afdc07ef-bbdc-4788-9393-fc47b4fb2601\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.720058 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-tk5j9"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.722722 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm"]
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.725815 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.733609 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svtjn\" (UniqueName: \"kubernetes.io/projected/be583ef4-64bc-485e-8f93-d48e090f8197-kube-api-access-svtjn\") pod \"machine-config-operator-74547568cd-nzchk\" (UID: \"be583ef4-64bc-485e-8f93-d48e090f8197\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.737714 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.743297 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.761691 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.762172 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.777816 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9"
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.855648 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5qn2m"]
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.920040 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n7p28"]
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.942630 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb"]
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.958350 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-9dn2q"]
Jan 21 10:39:14 crc kubenswrapper[4745]: I0121 10:39:14.985867 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp"]
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.043196 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-gwvtn"]
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.045339 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm"]
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.064217 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" event={"ID":"ba46635c-6397-49d9-9500-8c6e6c0fc4c1","Type":"ContainerStarted","Data":"d3fed01c607095607e6700280121121540da2e23895fa62c4355e0523356879e"}
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.065608 4745 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5ck2f container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused" start-of-body=
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.065657 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" podUID="9896d393-c134-4abe-ac04-1da7e6ea3aed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.7:6443/healthz\": dial tcp 10.217.0.7:6443: connect: connection refused"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.071620 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.097756 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.111894 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.113520 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-955l6\" (UID: \"2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.123979 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kn7c\" (UniqueName: \"kubernetes.io/projected/f9c06282-abf7-4d46-90df-6d48394448cf-kube-api-access-6kn7c\") pod \"catalog-operator-68c6474976-n5ft4\" (UID: \"f9c06282-abf7-4d46-90df-6d48394448cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.124043 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-certificates\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.124232 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.124429 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fcg2s\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.124867 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.125122 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dl4sm\" (UniqueName: \"kubernetes.io/projected/b6e57768-b27f-42e3-9bd5-2e8eac4f06ce-kube-api-access-dl4sm\") pod \"service-ca-9c57cc56f-7r9dl\" (UID: \"b6e57768-b27f-42e3-9bd5-2e8eac4f06ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.125796 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.125994 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b6e57768-b27f-42e3-9bd5-2e8eac4f06ce-signing-key\") pod \"service-ca-9c57cc56f-7r9dl\" (UID: \"b6e57768-b27f-42e3-9bd5-2e8eac4f06ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.131610 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ca87c694-02c0-4b6f-a4f0-5fd16777f406-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-smff5\" (UID: \"ca87c694-02c0-4b6f-a4f0-5fd16777f406\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5"
Jan 21 10:39:15 crc kubenswrapper[4745]: E0121 10:39:15.137013 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:15.636990491 +0000 UTC m=+140.097778079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.139400 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jkcb\" (UniqueName: \"kubernetes.io/projected/9f884d1f-fcd5-4179-9350-6b41b3d136b7-kube-api-access-2jkcb\") pod \"olm-operator-6b444d44fb-lw9m4\" (UID: \"9f884d1f-fcd5-4179-9350-6b41b3d136b7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.142363 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-bound-sa-token\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.142420 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fcg2s\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.142651 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-trusted-ca\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.142741 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f884d1f-fcd5-4179-9350-6b41b3d136b7-srv-cert\") pod \"olm-operator-6b444d44fb-lw9m4\" (UID: \"9f884d1f-fcd5-4179-9350-6b41b3d136b7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.142826 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5752ba7-8465-4a19-b7a3-d2b4effe5f23-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-szgtz\" (UID: \"f5752ba7-8465-4a19-b7a3-d2b4effe5f23\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.142889 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mzl6\" (UniqueName: \"kubernetes.io/projected/db0e48bf-347d-4985-b809-a25cc11db944-kube-api-access-5mzl6\") pod \"marketplace-operator-79b997595-fcg2s\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.173602 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.179144 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr4gd\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-kube-api-access-gr4gd\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.179243 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f884d1f-fcd5-4179-9350-6b41b3d136b7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lw9m4\" (UID: \"9f884d1f-fcd5-4179-9350-6b41b3d136b7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.181001 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.181292 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f9c06282-abf7-4d46-90df-6d48394448cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-n5ft4\" (UID: \"f9c06282-abf7-4d46-90df-6d48394448cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.189489 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f452\" (UniqueName: \"kubernetes.io/projected/a50b422a-12cf-4f7f-b13d-5e9c21daeca9-kube-api-access-2f452\") pod \"migrator-59844c95c7-97m6c\" (UID: \"a50b422a-12cf-4f7f-b13d-5e9c21daeca9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.189708 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5mxq\" (UniqueName: \"kubernetes.io/projected/f5752ba7-8465-4a19-b7a3-d2b4effe5f23-kube-api-access-b5mxq\") pod \"package-server-manager-789f6589d5-szgtz\" (UID: \"f5752ba7-8465-4a19-b7a3-d2b4effe5f23\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.189808 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f9c06282-abf7-4d46-90df-6d48394448cf-srv-cert\") pod \"catalog-operator-68c6474976-n5ft4\" (UID: \"f9c06282-abf7-4d46-90df-6d48394448cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.190090 4745
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b6e57768-b27f-42e3-9bd5-2e8eac4f06ce-signing-cabundle\") pod \"service-ca-9c57cc56f-7r9dl\" (UID: \"b6e57768-b27f-42e3-9bd5-2e8eac4f06ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.190209 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-tls\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.295441 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.295810 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/116f95bd-c6f1-4137-b3c7-72396c7b4d03-cert\") pod \"ingress-canary-l6rr9\" (UID: \"116f95bd-c6f1-4137-b3c7-72396c7b4d03\") " pod="openshift-ingress-canary/ingress-canary-l6rr9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.295872 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1c4364e-4898-4cd5-9ac7-9c800820e244-secret-volume\") pod \"collect-profiles-29483190-5pfx2\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 
10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.295899 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/64b535a9-9c1a-47e2-b92c-bc8d6560ed44-metrics-tls\") pod \"dns-default-9rsxp\" (UID: \"64b535a9-9c1a-47e2-b92c-bc8d6560ed44\") " pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.295926 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/541d8319-86e8-436d-92a9-6564dafb8388-etcd-client\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.295968 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.295996 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dl4sm\" (UniqueName: \"kubernetes.io/projected/b6e57768-b27f-42e3-9bd5-2e8eac4f06ce-kube-api-access-dl4sm\") pod \"service-ca-9c57cc56f-7r9dl\" (UID: \"b6e57768-b27f-42e3-9bd5-2e8eac4f06ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296036 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f89ae10c-7af8-4f4e-bba6-10172a20919f-certs\") pod \"machine-config-server-jc6n7\" (UID: \"f89ae10c-7af8-4f4e-bba6-10172a20919f\") " 
pod="openshift-machine-config-operator/machine-config-server-jc6n7" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296066 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-registration-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296108 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/541d8319-86e8-436d-92a9-6564dafb8388-etcd-ca\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296133 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dff5125-5877-469c-9630-f935a526a97e-config\") pod \"service-ca-operator-777779d784-k2kq9\" (UID: \"6dff5125-5877-469c-9630-f935a526a97e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296197 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b6e57768-b27f-42e3-9bd5-2e8eac4f06ce-signing-key\") pod \"service-ca-9c57cc56f-7r9dl\" (UID: \"b6e57768-b27f-42e3-9bd5-2e8eac4f06ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296226 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4bg8\" (UniqueName: \"kubernetes.io/projected/541d8319-86e8-436d-92a9-6564dafb8388-kube-api-access-k4bg8\") 
pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296320 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jkcb\" (UniqueName: \"kubernetes.io/projected/9f884d1f-fcd5-4179-9350-6b41b3d136b7-kube-api-access-2jkcb\") pod \"olm-operator-6b444d44fb-lw9m4\" (UID: \"9f884d1f-fcd5-4179-9350-6b41b3d136b7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296368 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-csi-data-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296395 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-socket-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296422 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-plugins-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296477 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-bound-sa-token\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296505 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fcg2s\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296564 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1c4364e-4898-4cd5-9ac7-9c800820e244-config-volume\") pod \"collect-profiles-29483190-5pfx2\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296594 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mt2q\" (UniqueName: \"kubernetes.io/projected/e1c4364e-4898-4cd5-9ac7-9c800820e244-kube-api-access-2mt2q\") pod \"collect-profiles-29483190-5pfx2\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296637 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-trusted-ca\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc 
kubenswrapper[4745]: I0121 10:39:15.296674 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl8gz\" (UniqueName: \"kubernetes.io/projected/64b535a9-9c1a-47e2-b92c-bc8d6560ed44-kube-api-access-kl8gz\") pod \"dns-default-9rsxp\" (UID: \"64b535a9-9c1a-47e2-b92c-bc8d6560ed44\") " pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296711 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f884d1f-fcd5-4179-9350-6b41b3d136b7-srv-cert\") pod \"olm-operator-6b444d44fb-lw9m4\" (UID: \"9f884d1f-fcd5-4179-9350-6b41b3d136b7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296744 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5752ba7-8465-4a19-b7a3-d2b4effe5f23-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-szgtz\" (UID: \"f5752ba7-8465-4a19-b7a3-d2b4effe5f23\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296771 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts5gt\" (UniqueName: \"kubernetes.io/projected/6dff5125-5877-469c-9630-f935a526a97e-kube-api-access-ts5gt\") pod \"service-ca-operator-777779d784-k2kq9\" (UID: \"6dff5125-5877-469c-9630-f935a526a97e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296800 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mzl6\" (UniqueName: \"kubernetes.io/projected/db0e48bf-347d-4985-b809-a25cc11db944-kube-api-access-5mzl6\") pod 
\"marketplace-operator-79b997595-fcg2s\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296822 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/541d8319-86e8-436d-92a9-6564dafb8388-etcd-service-ca\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296885 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr4gd\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-kube-api-access-gr4gd\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296959 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f884d1f-fcd5-4179-9350-6b41b3d136b7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lw9m4\" (UID: \"9f884d1f-fcd5-4179-9350-6b41b3d136b7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.296984 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541d8319-86e8-436d-92a9-6564dafb8388-config\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297007 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-2qkmx\" (UniqueName: \"kubernetes.io/projected/116f95bd-c6f1-4137-b3c7-72396c7b4d03-kube-api-access-2qkmx\") pod \"ingress-canary-l6rr9\" (UID: \"116f95bd-c6f1-4137-b3c7-72396c7b4d03\") " pod="openshift-ingress-canary/ingress-canary-l6rr9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297046 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q55lk\" (UniqueName: \"kubernetes.io/projected/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-kube-api-access-q55lk\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297075 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f9c06282-abf7-4d46-90df-6d48394448cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-n5ft4\" (UID: \"f9c06282-abf7-4d46-90df-6d48394448cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297100 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64b535a9-9c1a-47e2-b92c-bc8d6560ed44-config-volume\") pod \"dns-default-9rsxp\" (UID: \"64b535a9-9c1a-47e2-b92c-bc8d6560ed44\") " pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297125 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541d8319-86e8-436d-92a9-6564dafb8388-serving-cert\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 
10:39:15.297148 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dff5125-5877-469c-9630-f935a526a97e-serving-cert\") pod \"service-ca-operator-777779d784-k2kq9\" (UID: \"6dff5125-5877-469c-9630-f935a526a97e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297193 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f452\" (UniqueName: \"kubernetes.io/projected/a50b422a-12cf-4f7f-b13d-5e9c21daeca9-kube-api-access-2f452\") pod \"migrator-59844c95c7-97m6c\" (UID: \"a50b422a-12cf-4f7f-b13d-5e9c21daeca9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297236 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f9c06282-abf7-4d46-90df-6d48394448cf-srv-cert\") pod \"catalog-operator-68c6474976-n5ft4\" (UID: \"f9c06282-abf7-4d46-90df-6d48394448cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297291 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5mxq\" (UniqueName: \"kubernetes.io/projected/f5752ba7-8465-4a19-b7a3-d2b4effe5f23-kube-api-access-b5mxq\") pod \"package-server-manager-789f6589d5-szgtz\" (UID: \"f5752ba7-8465-4a19-b7a3-d2b4effe5f23\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297316 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b6e57768-b27f-42e3-9bd5-2e8eac4f06ce-signing-cabundle\") pod \"service-ca-9c57cc56f-7r9dl\" (UID: 
\"b6e57768-b27f-42e3-9bd5-2e8eac4f06ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297345 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-tls\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297433 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr2g2\" (UniqueName: \"kubernetes.io/projected/f89ae10c-7af8-4f4e-bba6-10172a20919f-kube-api-access-tr2g2\") pod \"machine-config-server-jc6n7\" (UID: \"f89ae10c-7af8-4f4e-bba6-10172a20919f\") " pod="openshift-machine-config-operator/machine-config-server-jc6n7" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297467 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kn7c\" (UniqueName: \"kubernetes.io/projected/f9c06282-abf7-4d46-90df-6d48394448cf-kube-api-access-6kn7c\") pod \"catalog-operator-68c6474976-n5ft4\" (UID: \"f9c06282-abf7-4d46-90df-6d48394448cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297511 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-certificates\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297574 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297600 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f89ae10c-7af8-4f4e-bba6-10172a20919f-node-bootstrap-token\") pod \"machine-config-server-jc6n7\" (UID: \"f89ae10c-7af8-4f4e-bba6-10172a20919f\") " pod="openshift-machine-config-operator/machine-config-server-jc6n7" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297655 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-mountpoint-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.297697 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fcg2s\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:39:15 crc kubenswrapper[4745]: E0121 10:39:15.298276 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:15.798250863 +0000 UTC m=+140.259038461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.312938 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.322739 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-fcg2s\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.327412 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-trusted-ca\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.332354 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-certificates\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: 
\"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.339101 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/b6e57768-b27f-42e3-9bd5-2e8eac4f06ce-signing-cabundle\") pod \"service-ca-9c57cc56f-7r9dl\" (UID: \"b6e57768-b27f-42e3-9bd5-2e8eac4f06ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.348479 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr4gd\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-kube-api-access-gr4gd\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.349483 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-tls\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.369728 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.369794 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-fcg2s\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.370175 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/b6e57768-b27f-42e3-9bd5-2e8eac4f06ce-signing-key\") pod \"service-ca-9c57cc56f-7r9dl\" (UID: \"b6e57768-b27f-42e3-9bd5-2e8eac4f06ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.370840 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f9c06282-abf7-4d46-90df-6d48394448cf-srv-cert\") pod \"catalog-operator-68c6474976-n5ft4\" (UID: \"f9c06282-abf7-4d46-90df-6d48394448cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.374428 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f884d1f-fcd5-4179-9350-6b41b3d136b7-srv-cert\") pod \"olm-operator-6b444d44fb-lw9m4\" (UID: \"9f884d1f-fcd5-4179-9350-6b41b3d136b7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.379325 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f884d1f-fcd5-4179-9350-6b41b3d136b7-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lw9m4\" (UID: \"9f884d1f-fcd5-4179-9350-6b41b3d136b7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.381215 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f9c06282-abf7-4d46-90df-6d48394448cf-profile-collector-cert\") pod \"catalog-operator-68c6474976-n5ft4\" (UID: \"f9c06282-abf7-4d46-90df-6d48394448cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.382922 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f5752ba7-8465-4a19-b7a3-d2b4effe5f23-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-szgtz\" (UID: \"f5752ba7-8465-4a19-b7a3-d2b4effe5f23\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.386501 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dl4sm\" (UniqueName: \"kubernetes.io/projected/b6e57768-b27f-42e3-9bd5-2e8eac4f06ce-kube-api-access-dl4sm\") pod \"service-ca-9c57cc56f-7r9dl\" (UID: \"b6e57768-b27f-42e3-9bd5-2e8eac4f06ce\") " pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.396816 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f452\" (UniqueName: \"kubernetes.io/projected/a50b422a-12cf-4f7f-b13d-5e9c21daeca9-kube-api-access-2f452\") pod \"migrator-59844c95c7-97m6c\" (UID: \"a50b422a-12cf-4f7f-b13d-5e9c21daeca9\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.398923 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-socket-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc 
kubenswrapper[4745]: I0121 10:39:15.398980 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-plugins-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399018 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1c4364e-4898-4cd5-9ac7-9c800820e244-config-volume\") pod \"collect-profiles-29483190-5pfx2\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399061 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mt2q\" (UniqueName: \"kubernetes.io/projected/e1c4364e-4898-4cd5-9ac7-9c800820e244-kube-api-access-2mt2q\") pod \"collect-profiles-29483190-5pfx2\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399103 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl8gz\" (UniqueName: \"kubernetes.io/projected/64b535a9-9c1a-47e2-b92c-bc8d6560ed44-kube-api-access-kl8gz\") pod \"dns-default-9rsxp\" (UID: \"64b535a9-9c1a-47e2-b92c-bc8d6560ed44\") " pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399139 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts5gt\" (UniqueName: \"kubernetes.io/projected/6dff5125-5877-469c-9630-f935a526a97e-kube-api-access-ts5gt\") pod \"service-ca-operator-777779d784-k2kq9\" (UID: \"6dff5125-5877-469c-9630-f935a526a97e\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399173 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/541d8319-86e8-436d-92a9-6564dafb8388-etcd-service-ca\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399207 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541d8319-86e8-436d-92a9-6564dafb8388-config\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399231 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qkmx\" (UniqueName: \"kubernetes.io/projected/116f95bd-c6f1-4137-b3c7-72396c7b4d03-kube-api-access-2qkmx\") pod \"ingress-canary-l6rr9\" (UID: \"116f95bd-c6f1-4137-b3c7-72396c7b4d03\") " pod="openshift-ingress-canary/ingress-canary-l6rr9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399252 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q55lk\" (UniqueName: \"kubernetes.io/projected/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-kube-api-access-q55lk\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399276 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64b535a9-9c1a-47e2-b92c-bc8d6560ed44-config-volume\") pod \"dns-default-9rsxp\" (UID: 
\"64b535a9-9c1a-47e2-b92c-bc8d6560ed44\") " pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399300 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541d8319-86e8-436d-92a9-6564dafb8388-serving-cert\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399325 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dff5125-5877-469c-9630-f935a526a97e-serving-cert\") pod \"service-ca-operator-777779d784-k2kq9\" (UID: \"6dff5125-5877-469c-9630-f935a526a97e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399389 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr2g2\" (UniqueName: \"kubernetes.io/projected/f89ae10c-7af8-4f4e-bba6-10172a20919f-kube-api-access-tr2g2\") pod \"machine-config-server-jc6n7\" (UID: \"f89ae10c-7af8-4f4e-bba6-10172a20919f\") " pod="openshift-machine-config-operator/machine-config-server-jc6n7" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399427 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f89ae10c-7af8-4f4e-bba6-10172a20919f-node-bootstrap-token\") pod \"machine-config-server-jc6n7\" (UID: \"f89ae10c-7af8-4f4e-bba6-10172a20919f\") " pod="openshift-machine-config-operator/machine-config-server-jc6n7" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399455 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-mountpoint-dir\") 
pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399506 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399553 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/116f95bd-c6f1-4137-b3c7-72396c7b4d03-cert\") pod \"ingress-canary-l6rr9\" (UID: \"116f95bd-c6f1-4137-b3c7-72396c7b4d03\") " pod="openshift-ingress-canary/ingress-canary-l6rr9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399577 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1c4364e-4898-4cd5-9ac7-9c800820e244-secret-volume\") pod \"collect-profiles-29483190-5pfx2\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399595 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/64b535a9-9c1a-47e2-b92c-bc8d6560ed44-metrics-tls\") pod \"dns-default-9rsxp\" (UID: \"64b535a9-9c1a-47e2-b92c-bc8d6560ed44\") " pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399620 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/541d8319-86e8-436d-92a9-6564dafb8388-etcd-client\") pod 
\"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399644 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f89ae10c-7af8-4f4e-bba6-10172a20919f-certs\") pod \"machine-config-server-jc6n7\" (UID: \"f89ae10c-7af8-4f4e-bba6-10172a20919f\") " pod="openshift-machine-config-operator/machine-config-server-jc6n7" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399665 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-registration-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399688 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/541d8319-86e8-436d-92a9-6564dafb8388-etcd-ca\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399712 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dff5125-5877-469c-9630-f935a526a97e-config\") pod \"service-ca-operator-777779d784-k2kq9\" (UID: \"6dff5125-5877-469c-9630-f935a526a97e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399736 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4bg8\" (UniqueName: 
\"kubernetes.io/projected/541d8319-86e8-436d-92a9-6564dafb8388-kube-api-access-k4bg8\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.399782 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-csi-data-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.400077 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-csi-data-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.400422 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-socket-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.400469 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-plugins-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.401385 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e1c4364e-4898-4cd5-9ac7-9c800820e244-config-volume\") pod \"collect-profiles-29483190-5pfx2\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:15 crc kubenswrapper[4745]: E0121 10:39:15.406696 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:15.906675788 +0000 UTC m=+140.367463386 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.407289 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/541d8319-86e8-436d-92a9-6564dafb8388-config\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.407923 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64b535a9-9c1a-47e2-b92c-bc8d6560ed44-config-volume\") pod \"dns-default-9rsxp\" (UID: \"64b535a9-9c1a-47e2-b92c-bc8d6560ed44\") " pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.408357 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/541d8319-86e8-436d-92a9-6564dafb8388-etcd-service-ca\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.412081 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-bound-sa-token\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.415230 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jkcb\" (UniqueName: \"kubernetes.io/projected/9f884d1f-fcd5-4179-9350-6b41b3d136b7-kube-api-access-2jkcb\") pod \"olm-operator-6b444d44fb-lw9m4\" (UID: \"9f884d1f-fcd5-4179-9350-6b41b3d136b7\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.415334 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/116f95bd-c6f1-4137-b3c7-72396c7b4d03-cert\") pod \"ingress-canary-l6rr9\" (UID: \"116f95bd-c6f1-4137-b3c7-72396c7b4d03\") " pod="openshift-ingress-canary/ingress-canary-l6rr9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.415568 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-registration-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.416306 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/541d8319-86e8-436d-92a9-6564dafb8388-etcd-ca\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.417317 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6dff5125-5877-469c-9630-f935a526a97e-config\") pod \"service-ca-operator-777779d784-k2kq9\" (UID: \"6dff5125-5877-469c-9630-f935a526a97e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.420365 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f89ae10c-7af8-4f4e-bba6-10172a20919f-certs\") pod \"machine-config-server-jc6n7\" (UID: \"f89ae10c-7af8-4f4e-bba6-10172a20919f\") " pod="openshift-machine-config-operator/machine-config-server-jc6n7" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.420640 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kn7c\" (UniqueName: \"kubernetes.io/projected/f9c06282-abf7-4d46-90df-6d48394448cf-kube-api-access-6kn7c\") pod \"catalog-operator-68c6474976-n5ft4\" (UID: \"f9c06282-abf7-4d46-90df-6d48394448cf\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.422050 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/541d8319-86e8-436d-92a9-6564dafb8388-serving-cert\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.428158 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-mountpoint-dir\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.431624 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.432507 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f89ae10c-7af8-4f4e-bba6-10172a20919f-node-bootstrap-token\") pod \"machine-config-server-jc6n7\" (UID: \"f89ae10c-7af8-4f4e-bba6-10172a20919f\") " pod="openshift-machine-config-operator/machine-config-server-jc6n7" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.433144 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5mxq\" (UniqueName: \"kubernetes.io/projected/f5752ba7-8465-4a19-b7a3-d2b4effe5f23-kube-api-access-b5mxq\") pod \"package-server-manager-789f6589d5-szgtz\" (UID: \"f5752ba7-8465-4a19-b7a3-d2b4effe5f23\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.435325 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6dff5125-5877-469c-9630-f935a526a97e-serving-cert\") pod \"service-ca-operator-777779d784-k2kq9\" (UID: \"6dff5125-5877-469c-9630-f935a526a97e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.461441 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.463127 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mzl6\" (UniqueName: \"kubernetes.io/projected/db0e48bf-347d-4985-b809-a25cc11db944-kube-api-access-5mzl6\") pod \"marketplace-operator-79b997595-fcg2s\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.469733 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/64b535a9-9c1a-47e2-b92c-bc8d6560ed44-metrics-tls\") pod \"dns-default-9rsxp\" (UID: \"64b535a9-9c1a-47e2-b92c-bc8d6560ed44\") " pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.472295 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1c4364e-4898-4cd5-9ac7-9c800820e244-secret-volume\") pod \"collect-profiles-29483190-5pfx2\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.474106 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts5gt\" (UniqueName: \"kubernetes.io/projected/6dff5125-5877-469c-9630-f935a526a97e-kube-api-access-ts5gt\") pod \"service-ca-operator-777779d784-k2kq9\" (UID: \"6dff5125-5877-469c-9630-f935a526a97e\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.476438 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/541d8319-86e8-436d-92a9-6564dafb8388-etcd-client\") pod 
\"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.482250 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl8gz\" (UniqueName: \"kubernetes.io/projected/64b535a9-9c1a-47e2-b92c-bc8d6560ed44-kube-api-access-kl8gz\") pod \"dns-default-9rsxp\" (UID: \"64b535a9-9c1a-47e2-b92c-bc8d6560ed44\") " pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.495031 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mt2q\" (UniqueName: \"kubernetes.io/projected/e1c4364e-4898-4cd5-9ac7-9c800820e244-kube-api-access-2mt2q\") pod \"collect-profiles-29483190-5pfx2\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.497404 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.497586 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.498554 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.502837 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:15 crc kubenswrapper[4745]: E0121 10:39:15.504386 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:16.00435947 +0000 UTC m=+140.465147068 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.508198 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.519375 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.524830 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q55lk\" (UniqueName: \"kubernetes.io/projected/59cfcfcd-7ed9-4f60-85ad-fcb228dc1895-kube-api-access-q55lk\") pod \"csi-hostpathplugin-tf44k\" (UID: \"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895\") " pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.525669 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qkmx\" (UniqueName: \"kubernetes.io/projected/116f95bd-c6f1-4137-b3c7-72396c7b4d03-kube-api-access-2qkmx\") pod \"ingress-canary-l6rr9\" (UID: \"116f95bd-c6f1-4137-b3c7-72396c7b4d03\") " pod="openshift-ingress-canary/ingress-canary-l6rr9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.555126 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.581494 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4bg8\" (UniqueName: \"kubernetes.io/projected/541d8319-86e8-436d-92a9-6564dafb8388-kube-api-access-k4bg8\") pod \"etcd-operator-b45778765-fmgwt\" (UID: \"541d8319-86e8-436d-92a9-6564dafb8388\") " pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.592510 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-tf44k" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.607691 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:15 crc kubenswrapper[4745]: E0121 10:39:15.607996 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:16.107983799 +0000 UTC m=+140.568771397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.628132 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-l6rr9" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.631605 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr2g2\" (UniqueName: \"kubernetes.io/projected/f89ae10c-7af8-4f4e-bba6-10172a20919f-kube-api-access-tr2g2\") pod \"machine-config-server-jc6n7\" (UID: \"f89ae10c-7af8-4f4e-bba6-10172a20919f\") " pod="openshift-machine-config-operator/machine-config-server-jc6n7" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.661932 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.708733 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:15 crc kubenswrapper[4745]: E0121 10:39:15.709207 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:16.209187493 +0000 UTC m=+140.669975091 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.811647 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:15 crc kubenswrapper[4745]: E0121 10:39:15.812007 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:16.311994171 +0000 UTC m=+140.772781769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.847090 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.870923 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jc6n7"
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.914019 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:15 crc kubenswrapper[4745]: E0121 10:39:15.914268 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:16.414213362 +0000 UTC m=+140.875000960 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:15 crc kubenswrapper[4745]: I0121 10:39:15.916046 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:15 crc kubenswrapper[4745]: E0121 10:39:15.916389 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:16.416376099 +0000 UTC m=+140.877163697 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.019648 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:16 crc kubenswrapper[4745]: E0121 10:39:16.021592 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:16.52156329 +0000 UTC m=+140.982350888 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.066437 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6"]
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.144667 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:16 crc kubenswrapper[4745]: E0121 10:39:16.149645 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:16.649623213 +0000 UTC m=+141.110410811 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.182302 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" event={"ID":"a89359b3-9f5c-4d38-8bf8-eb833252867b","Type":"ContainerStarted","Data":"fc8e48ebc35d058be113f1226e4a7d4ea84d87500efb0cdba3de82b38d9deceb"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.238823 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" event={"ID":"f49241c8-5cc4-49da-be3b-9e6f39dbcc04","Type":"ContainerStarted","Data":"cf46f858dd1df4de9ffded7dd6d9753d3d5ee01fc456c6ad44ab43a082ef95eb"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.251003 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:16 crc kubenswrapper[4745]: E0121 10:39:16.251339 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:16.751314062 +0000 UTC m=+141.212101660 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.289903 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dhkkd"]
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.352088 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:16 crc kubenswrapper[4745]: E0121 10:39:16.352432 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:16.852418834 +0000 UTC m=+141.313206432 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.396907 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" podStartSLOduration=117.396883359 podStartE2EDuration="1m57.396883359s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:16.396258883 +0000 UTC m=+140.857046481" watchObservedRunningTime="2026-01-21 10:39:16.396883359 +0000 UTC m=+140.857670957"
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.401192 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9dn2q" event={"ID":"28428682-3f1f-4077-887e-f1570b385a8c","Type":"ContainerStarted","Data":"c715a1390a49bf8e9439bdc388184dde688535c59b49332463e620f8ed7ef80b"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.401267 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-9dn2q" event={"ID":"28428682-3f1f-4077-887e-f1570b385a8c","Type":"ContainerStarted","Data":"7ba4948cc46063bcc83e978623cc8986f8dc5ae2880d1652af854006df79a874"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.401864 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-9dn2q"
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.412070 4745 patch_prober.go:28] interesting pod/console-operator-58897d9998-9dn2q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body=
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.412129 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9dn2q" podUID="28428682-3f1f-4077-887e-f1570b385a8c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused"
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.458321 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:16 crc kubenswrapper[4745]: E0121 10:39:16.459310 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:16.959245477 +0000 UTC m=+141.420033225 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.471880 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tk5j9" event={"ID":"5c53e15f-0e61-49a2-bb11-8b39af387be9","Type":"ContainerStarted","Data":"d5c712ad053dd996a45d13b0c33353124c6f645d5739faf8aa9675e6b0d13aaa"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.493629 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" event={"ID":"5fb54c7c-4796-4d7d-8fce-519b5323c2ad","Type":"ContainerStarted","Data":"a5e4dce6f8600d2fe294575fef1384dd4a8b904176ab805dd4facdb0fd9afc92"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.496275 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb" event={"ID":"c1f4c3a6-f097-4220-a03d-a34e2e70027a","Type":"ContainerStarted","Data":"1302830311dc4c4d6442fbcf2aef71b0dacb44aded770dea2f8d91bcf12a5b31"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.499272 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" event={"ID":"ba46635c-6397-49d9-9500-8c6e6c0fc4c1","Type":"ContainerStarted","Data":"82319a432da36a86e26dac638a007c0ad95b4ca8e2be518973b479c8d4d38a9c"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.531150 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n7p28" event={"ID":"c531fa6e-de28-476b-8b34-aca8b0e2cc56","Type":"ContainerStarted","Data":"e0fb9bb530374ff8ce117e4320d39f52e8cc9de16bd6cb794378d6d38d197945"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.538109 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m" event={"ID":"9a81a913-b0f8-44c1-b1ab-dbeab680f536","Type":"ContainerStarted","Data":"47e9aafe7a7e076fe7be036e38a32238116d4a7c31d8c0cbec79c1bf639c5d01"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.541710 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gwvtn" event={"ID":"fe3c7d57-12a7-426c-8c02-fe7f24949bae","Type":"ContainerStarted","Data":"5684092ce8d5f616e0731bb5d86ee76de7b32fd84c405aa59e1af3079debb890"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.552696 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" event={"ID":"5d25df07-ad4c-4a02-bd0b-241e69a4f0f4","Type":"ContainerStarted","Data":"b51e963c48201a4d1247cd2f2f13f1dd720fed1ac38af248a6241921e71f6b6d"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.552740 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh"
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.566441 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:16 crc kubenswrapper[4745]: E0121 10:39:16.568165 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:17.068145455 +0000 UTC m=+141.528933273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.570730 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" event={"ID":"2228e29c-3b0e-4358-91a2-dcf925981bda","Type":"ContainerStarted","Data":"cd4a4d6c25b7bce56e4d54c8c9c40ac1d0c2d92489f01b2b5a78c55c33f1a816"}
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.599123 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" podStartSLOduration=116.599101494 podStartE2EDuration="1m56.599101494s" podCreationTimestamp="2026-01-21 10:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:16.491990843 +0000 UTC m=+140.952778441" watchObservedRunningTime="2026-01-21 10:39:16.599101494 +0000 UTC m=+141.059889092"
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.668092 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:16 crc kubenswrapper[4745]: E0121 10:39:16.668693 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:17.168674082 +0000 UTC m=+141.629461680 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.730139 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-hdz59" podStartSLOduration=117.730117436 podStartE2EDuration="1m57.730117436s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:16.727842626 +0000 UTC m=+141.188630234" watchObservedRunningTime="2026-01-21 10:39:16.730117436 +0000 UTC m=+141.190905034"
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.773445 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:16 crc kubenswrapper[4745]: E0121 10:39:16.786151 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:17.286135067 +0000 UTC m=+141.746922665 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.883671 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:16 crc kubenswrapper[4745]: E0121 10:39:16.884110 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:17.384083956 +0000 UTC m=+141.844871554 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.913198 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-fsw2t" podStartSLOduration=117.913171074 podStartE2EDuration="1m57.913171074s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:16.80443807 +0000 UTC m=+141.265225668" watchObservedRunningTime="2026-01-21 10:39:16.913171074 +0000 UTC m=+141.373958672"
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.914690 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mlbwr" podStartSLOduration=119.914682075 podStartE2EDuration="1m59.914682075s" podCreationTimestamp="2026-01-21 10:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:16.911977683 +0000 UTC m=+141.372765281" watchObservedRunningTime="2026-01-21 10:39:16.914682075 +0000 UTC m=+141.375469673"
Jan 21 10:39:16 crc kubenswrapper[4745]: I0121 10:39:16.985523 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:16 crc kubenswrapper[4745]: E0121 10:39:16.985962 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:17.485948468 +0000 UTC m=+141.946736066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.092871 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:17 crc kubenswrapper[4745]: E0121 10:39:17.093608 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:17.593584022 +0000 UTC m=+142.054371620 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.103101 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk"]
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.498187 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5"]
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.500759 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:17 crc kubenswrapper[4745]: E0121 10:39:17.501237 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:18.001221596 +0000 UTC m=+142.462009194 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.602908 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:17 crc kubenswrapper[4745]: E0121 10:39:17.603457 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:18.103425507 +0000 UTC m=+142.564213105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.609582 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5"]
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.610385 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2"
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.610494 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2"
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.681853 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd" event={"ID":"ea889c30-b820-47fa-8232-f96ed56ba8e1","Type":"ContainerStarted","Data":"c4e49ed1eededeafa535c7cd27ac431c92b0a3c2d656ac2f2e3a795f3a0f90dd"}
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.688099 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jc6n7" event={"ID":"f89ae10c-7af8-4f4e-bba6-10172a20919f","Type":"ContainerStarted","Data":"3a2468dcae84aae524652f7614198121184a30fd23753118de224eaae7107758"}
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.695061 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gwvtn" event={"ID":"fe3c7d57-12a7-426c-8c02-fe7f24949bae","Type":"ContainerStarted","Data":"ae7b6b58cb07db3a844fca358214187ef4d244c7505afacb0ddb2b08e02bd901"}
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.696188 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gwvtn"
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.702335 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" event={"ID":"a89359b3-9f5c-4d38-8bf8-eb833252867b","Type":"ContainerStarted","Data":"86407162d554387d38d7e2d275698db05dac15b377fd5cbb2e99db4d222fca54"}
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.704624 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:17 crc kubenswrapper[4745]: E0121 10:39:17.705072 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:18.205054923 +0000 UTC m=+142.665842521 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.713236 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" event={"ID":"2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b","Type":"ContainerStarted","Data":"754ccc27f82de96abe71e608efd21f537bc49e62fa369a941c9637aa6267f089"}
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.732698 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.732799 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.813486 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:17 crc kubenswrapper[4745]: E0121 10:39:17.815994 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:18.315960004 +0000 UTC m=+142.776747602 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:17 crc kubenswrapper[4745]: I0121 10:39:17.917806 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:17 crc kubenswrapper[4745]: E0121 10:39:17.918292 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:18.418275649 +0000 UTC m=+142.879063247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.051943 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:18 crc kubenswrapper[4745]: E0121 10:39:18.052248 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:18.552227039 +0000 UTC m=+143.013014637 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.053651 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" event={"ID":"5fb54c7c-4796-4d7d-8fce-519b5323c2ad","Type":"ContainerStarted","Data":"b814e076dfdfd76b0e727c68e0833e780239f9b670fd04241d05d36d3154535c"} Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.054673 4745 patch_prober.go:28] interesting pod/console-operator-58897d9998-9dn2q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.054697 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9dn2q" podUID="28428682-3f1f-4077-887e-f1570b385a8c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.152745 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 
10:39:18 crc kubenswrapper[4745]: E0121 10:39:18.155712 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:18.655692584 +0000 UTC m=+143.116480182 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.253152 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:18 crc kubenswrapper[4745]: E0121 10:39:18.253429 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:18.753414985 +0000 UTC m=+143.214202583 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.355294 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:18 crc kubenswrapper[4745]: E0121 10:39:18.356764 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:18.856744177 +0000 UTC m=+143.317531775 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.443620 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" podStartSLOduration=119.44350451 podStartE2EDuration="1m59.44350451s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:18.43367006 +0000 UTC m=+142.894457668" watchObservedRunningTime="2026-01-21 10:39:18.44350451 +0000 UTC m=+142.904292108" Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.457262 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:18 crc kubenswrapper[4745]: E0121 10:39:18.463864 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:18.963823637 +0000 UTC m=+143.424611235 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.464100 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:18 crc kubenswrapper[4745]: E0121 10:39:18.464410 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:18.964402953 +0000 UTC m=+143.425190551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.511743 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-j4phh" podStartSLOduration=120.511724323 podStartE2EDuration="2m0.511724323s" podCreationTimestamp="2026-01-21 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:18.511085616 +0000 UTC m=+142.971873204" watchObservedRunningTime="2026-01-21 10:39:18.511724323 +0000 UTC m=+142.972511911" Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.570272 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:18 crc kubenswrapper[4745]: E0121 10:39:18.571072 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:19.071045941 +0000 UTC m=+143.531833539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.579997 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:18 crc kubenswrapper[4745]: E0121 10:39:18.580362 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:19.080345447 +0000 UTC m=+143.541133045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.620097 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" podStartSLOduration=121.620080497 podStartE2EDuration="2m1.620080497s" podCreationTimestamp="2026-01-21 10:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:18.619356338 +0000 UTC m=+143.080143936" watchObservedRunningTime="2026-01-21 10:39:18.620080497 +0000 UTC m=+143.080868095" Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.691432 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:18 crc kubenswrapper[4745]: E0121 10:39:18.692505 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:19.19248014 +0000 UTC m=+143.653267738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.769554 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.795483 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:18 crc kubenswrapper[4745]: E0121 10:39:18.795883 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:19.295866993 +0000 UTC m=+143.756654591 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.798842 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6nzgh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 10:39:18 crc kubenswrapper[4745]: [+]log ok Jan 21 10:39:18 crc kubenswrapper[4745]: [+]poststarthook/max-in-flight-filter ok Jan 21 10:39:18 crc kubenswrapper[4745]: [-]poststarthook/storage-object-count-tracker-hook failed: reason withheld Jan 21 10:39:18 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.798911 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" podUID="5d25df07-ad4c-4a02-bd0b-241e69a4f0f4" containerName="openshift-config-operator" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.803061 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.866180 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" podStartSLOduration=119.86616018 podStartE2EDuration="1m59.86616018s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:18.854299867 +0000 UTC m=+143.315087465" watchObservedRunningTime="2026-01-21 10:39:18.86616018 +0000 UTC m=+143.326947778" Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.907811 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:18 crc kubenswrapper[4745]: E0121 10:39:18.908293 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:19.408278744 +0000 UTC m=+143.869066332 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.913734 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" podStartSLOduration=120.913703297 podStartE2EDuration="2m0.913703297s" podCreationTimestamp="2026-01-21 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:18.897215021 +0000 UTC m=+143.358002619" watchObservedRunningTime="2026-01-21 10:39:18.913703297 +0000 UTC m=+143.374490895" Jan 21 10:39:18 crc kubenswrapper[4745]: I0121 10:39:18.963802 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-9dn2q" podStartSLOduration=120.963778671 podStartE2EDuration="2m0.963778671s" podCreationTimestamp="2026-01-21 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:18.938183194 +0000 UTC m=+143.398970792" watchObservedRunningTime="2026-01-21 10:39:18.963778671 +0000 UTC m=+143.424566269" Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.021461 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-srbtm" podStartSLOduration=122.021445845 podStartE2EDuration="2m2.021445845s" podCreationTimestamp="2026-01-21 10:37:17 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:19.020824598 +0000 UTC m=+143.481612196" watchObservedRunningTime="2026-01-21 10:39:19.021445845 +0000 UTC m=+143.482233443" Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.023455 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.023769 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:19.523756585 +0000 UTC m=+143.984544183 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.051629 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm"] Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.052627 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9"] Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.068918 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4"] Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.079861 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" event={"ID":"ca87c694-02c0-4b6f-a4f0-5fd16777f406","Type":"ContainerStarted","Data":"87fc13a6bb17d176e0a0dccc81b1bbec2b28a1ce4a8bb5509325eb9fd1987ea8"} Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.083229 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-gwvtn" podStartSLOduration=120.083206057 podStartE2EDuration="2m0.083206057s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:19.069974837 +0000 UTC m=+143.530762435" watchObservedRunningTime="2026-01-21 10:39:19.083206057 +0000 UTC m=+143.543993655" Jan 21 10:39:19 crc 
kubenswrapper[4745]: I0121 10:39:19.092827 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m" event={"ID":"9a81a913-b0f8-44c1-b1ab-dbeab680f536","Type":"ContainerStarted","Data":"a5de809a4e7c8ee5d2d9ffbe5684d05e64cf65a47a3cdb32471fbbb65d5ff81c"} Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.100238 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5" event={"ID":"1eb90eab-f69a-4fef-aef1-b8f4473b91fd","Type":"ContainerStarted","Data":"3b37e42023b9946327a31c826fa83b8331ffcc2c327b033cb825b562f76ee74e"} Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.141312 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.141591 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:19.64157714 +0000 UTC m=+144.102364728 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.145919 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-tk5j9" event={"ID":"5c53e15f-0e61-49a2-bb11-8b39af387be9","Type":"ContainerStarted","Data":"9b7bb26abc87d47a15260b823b18547de9a404bc0483a8e35320f4aec1d0e98d"} Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.152438 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb" event={"ID":"c1f4c3a6-f097-4220-a03d-a34e2e70027a","Type":"ContainerStarted","Data":"d641bc494eafadd17a7b84f659320ba57a0375720845f30527206afd26a0a612"} Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.153272 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" event={"ID":"be583ef4-64bc-485e-8f93-d48e090f8197","Type":"ContainerStarted","Data":"b7ce66844320599a1749a9bb1559339470f929ca42ccd5638235ca42a1c3e953"} Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.173420 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" event={"ID":"ba46635c-6397-49d9-9500-8c6e6c0fc4c1","Type":"ContainerStarted","Data":"64cc4da4c377e5973d7a06f80ed58e93568b4f882b7088d918508de71e0f868b"} Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.200116 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" event={"ID":"f49241c8-5cc4-49da-be3b-9e6f39dbcc04","Type":"ContainerStarted","Data":"3032335f6cb21cf5f930bcc24d9456b44e4a8514e1360f7737c6b7c5ca54deeb"} Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.200239 4745 patch_prober.go:28] interesting pod/console-operator-58897d9998-9dn2q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" start-of-body= Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.200280 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9dn2q" podUID="28428682-3f1f-4077-887e-f1570b385a8c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": dial tcp 10.217.0.16:8443: connect: connection refused" Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.203899 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.203980 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.235619 4745 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-vncj2 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 
10:39:19 crc kubenswrapper[4745]: [+]log ok Jan 21 10:39:19 crc kubenswrapper[4745]: [+]etcd ok Jan 21 10:39:19 crc kubenswrapper[4745]: [+]etcd-readiness ok Jan 21 10:39:19 crc kubenswrapper[4745]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 10:39:19 crc kubenswrapper[4745]: [-]informer-sync failed: reason withheld Jan 21 10:39:19 crc kubenswrapper[4745]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 10:39:19 crc kubenswrapper[4745]: [+]poststarthook/max-in-flight-filter ok Jan 21 10:39:19 crc kubenswrapper[4745]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 10:39:19 crc kubenswrapper[4745]: [+]poststarthook/openshift.io-StartUserInformer ok Jan 21 10:39:19 crc kubenswrapper[4745]: [+]poststarthook/openshift.io-StartOAuthInformer ok Jan 21 10:39:19 crc kubenswrapper[4745]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Jan 21 10:39:19 crc kubenswrapper[4745]: [+]shutdown ok Jan 21 10:39:19 crc kubenswrapper[4745]: readyz check failed Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.235717 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" podUID="2228e29c-3b0e-4358-91a2-dcf925981bda" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.254303 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.256647 4745 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:19.756632781 +0000 UTC m=+144.217420369 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.365223 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.365825 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:19.865799996 +0000 UTC m=+144.326587594 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.470055 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.470334 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:19.970321658 +0000 UTC m=+144.431109256 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.571153 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.571580 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.071562444 +0000 UTC m=+144.532350042 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.674774 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.675209 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.175189053 +0000 UTC m=+144.635976651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.776165 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.776397 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.276362297 +0000 UTC m=+144.737149895 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.776552 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.776990 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.276979313 +0000 UTC m=+144.737767081 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.878505 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.878776 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.378737113 +0000 UTC m=+144.839524711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.878967 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.879612 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.379583425 +0000 UTC m=+144.840371193 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.981120 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.981347 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.481306213 +0000 UTC m=+144.942093821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:19 crc kubenswrapper[4745]: I0121 10:39:19.981767 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:19 crc kubenswrapper[4745]: E0121 10:39:19.982209 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.482189717 +0000 UTC m=+144.942977315 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.082811 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:20 crc kubenswrapper[4745]: E0121 10:39:20.083850 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.583820953 +0000 UTC m=+145.044608551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.185452 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:20 crc kubenswrapper[4745]: E0121 10:39:20.186212 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.686199428 +0000 UTC m=+145.146987026 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.226755 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2"] Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.228774 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" event={"ID":"f9c06282-abf7-4d46-90df-6d48394448cf","Type":"ContainerStarted","Data":"3bccc4ce2a74770d0749214cc148d017e691ced203030b7a438a43b926108780"} Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.232233 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-l6rr9"] Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.238599 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" event={"ID":"afdc07ef-bbdc-4788-9393-fc47b4fb2601","Type":"ContainerStarted","Data":"3a7c6b3e5e25d25e080b2a770eafb29094c3e13a5854158082e8c9e6abe7e768"} Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.247687 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-9rsxp"] Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.262314 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" 
event={"ID":"71f88dc0-76ba-49bd-8d25-87454497d61d","Type":"ContainerStarted","Data":"ed7e0dabe1f070a7384c19802bc9a063fea8cc709f81732510d9d7da4a1f05f2"} Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.265482 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.265653 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.289476 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:20 crc kubenswrapper[4745]: E0121 10:39:20.291938 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.791908152 +0000 UTC m=+145.252695760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.301155 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-vncj2" Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.316183 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz"] Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.328790 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9"] Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.371712 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27"] Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.391153 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-fmgwt"] Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.392664 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:20 crc kubenswrapper[4745]: E0121 10:39:20.393231 4745 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.893206689 +0000 UTC m=+145.353994347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.393300 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4"] Jan 21 10:39:20 crc kubenswrapper[4745]: W0121 10:39:20.400099 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64b535a9_9c1a_47e2_b92c_bc8d6560ed44.slice/crio-389cc8da2f46d06882d51998b1c5f8ebd8571ba2c9859f7c958017ca59ceff9c WatchSource:0}: Error finding container 389cc8da2f46d06882d51998b1c5f8ebd8571ba2c9859f7c958017ca59ceff9c: Status 404 returned error can't find the container with id 389cc8da2f46d06882d51998b1c5f8ebd8571ba2c9859f7c958017ca59ceff9c Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.423586 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-tf44k"] Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.447710 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-tk5j9" podStartSLOduration=121.44768616 podStartE2EDuration="2m1.44768616s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:20.377823863 +0000 UTC m=+144.838611451" watchObservedRunningTime="2026-01-21 10:39:20.44768616 +0000 UTC m=+144.908473758" Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.448572 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fcg2s"] Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.493453 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:20 crc kubenswrapper[4745]: E0121 10:39:20.493742 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:20.993729326 +0000 UTC m=+145.454516924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.505321 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c"] Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.513013 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7r9dl"] Jan 21 10:39:20 crc kubenswrapper[4745]: W0121 10:39:20.519090 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73b9d8b3_fe5c_47c4_bc95_10fc459b4754.slice/crio-bfccdca50b1af8387284e9e267314681ea66774d2ed850f067495a79b69d4221 WatchSource:0}: Error finding container bfccdca50b1af8387284e9e267314681ea66774d2ed850f067495a79b69d4221: Status 404 returned error can't find the container with id bfccdca50b1af8387284e9e267314681ea66774d2ed850f067495a79b69d4221 Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.556795 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-v5dhp" podStartSLOduration=121.556760762 podStartE2EDuration="2m1.556760762s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:20.51392707 +0000 UTC m=+144.974714668" watchObservedRunningTime="2026-01-21 10:39:20.556760762 +0000 UTC m=+145.017548360" Jan 21 10:39:20 crc 
kubenswrapper[4745]: E0121 10:39:20.602129 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:21.1021031 +0000 UTC m=+145.562890688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.604961 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:20 crc kubenswrapper[4745]: W0121 10:39:20.627351 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb0e48bf_347d_4985_b809_a25cc11db944.slice/crio-59dea8809b5d2f951f1b10ea4da9f83675c5071ac03a73eaaacee22c2f31328a WatchSource:0}: Error finding container 59dea8809b5d2f951f1b10ea4da9f83675c5071ac03a73eaaacee22c2f31328a: Status 404 returned error can't find the container with id 59dea8809b5d2f951f1b10ea4da9f83675c5071ac03a73eaaacee22c2f31328a Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.713321 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:20 crc kubenswrapper[4745]: E0121 10:39:20.713782 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:21.213751391 +0000 UTC m=+145.674539129 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.722408 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.724300 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.724375 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.844339 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:20 crc kubenswrapper[4745]: E0121 10:39:20.845581 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:21.345564755 +0000 UTC m=+145.806352353 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:20 crc kubenswrapper[4745]: I0121 10:39:20.946304 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:20 crc kubenswrapper[4745]: E0121 10:39:20.946675 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:21.446659587 +0000 UTC m=+145.907447185 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.047676 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:21 crc kubenswrapper[4745]: E0121 10:39:21.048190 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:21.54816857 +0000 UTC m=+146.008956168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.235855 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:21 crc kubenswrapper[4745]: E0121 10:39:21.236352 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:21.736323182 +0000 UTC m=+146.197110780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.345521 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:21 crc kubenswrapper[4745]: E0121 10:39:21.345903 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:21.845889888 +0000 UTC m=+146.306677476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.445968 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:21 crc kubenswrapper[4745]: E0121 10:39:21.446540 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:21.946492427 +0000 UTC m=+146.407280025 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.446734 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:21 crc kubenswrapper[4745]: E0121 10:39:21.447036 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:21.947021601 +0000 UTC m=+146.407809189 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.475983 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" event={"ID":"9f884d1f-fcd5-4179-9350-6b41b3d136b7","Type":"ContainerStarted","Data":"001011ebe828487411128cb85441ce8db4d2f77c96e9d619aa71944e932930fb"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.503187 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tf44k" event={"ID":"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895","Type":"ContainerStarted","Data":"8155283da73d05339b4f3c76e949d3c4dc4d3d0ef00c5d0b82ebefe1653d001e"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.504705 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" event={"ID":"b6e57768-b27f-42e3-9bd5-2e8eac4f06ce","Type":"ContainerStarted","Data":"6c59d174988a928162934a63e610625f0903b9fc9d414de4a2c963c11201da53"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.509884 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" event={"ID":"e1c4364e-4898-4cd5-9ac7-9c800820e244","Type":"ContainerStarted","Data":"0ddeb35afa2f5d2970f0b950c9435553ffc0b965f937db0d8a4d67655a83f094"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.511804 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" 
event={"ID":"ca87c694-02c0-4b6f-a4f0-5fd16777f406","Type":"ContainerStarted","Data":"a617bbb3a3646d31717515b0b29fa29dd778c2b74b5b8c487e9dc1b8935f5090"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.514541 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" event={"ID":"be583ef4-64bc-485e-8f93-d48e090f8197","Type":"ContainerStarted","Data":"7b74abf42a084bd2e3ca53ca9b11cfd821c1004d7c6d3549077d82047caed828"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.515313 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" event={"ID":"6dff5125-5877-469c-9630-f935a526a97e","Type":"ContainerStarted","Data":"bce23810b0f3db3ef6f4557747af538b65fde102b0ac70058fd480cb8a7d7d34"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.516693 4745 generic.go:334] "Generic (PLEG): container finished" podID="c531fa6e-de28-476b-8b34-aca8b0e2cc56" containerID="db0126e0e22c38bb15ec7cc2dc47736a066123f2ffcb30a126f8b48001a71e8d" exitCode=0 Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.516738 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n7p28" event={"ID":"c531fa6e-de28-476b-8b34-aca8b0e2cc56","Type":"ContainerDied","Data":"db0126e0e22c38bb15ec7cc2dc47736a066123f2ffcb30a126f8b48001a71e8d"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.521241 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" event={"ID":"f5752ba7-8465-4a19-b7a3-d2b4effe5f23","Type":"ContainerStarted","Data":"7251dfa8722439fde3e04d04e47dab74bbc1f7706c9fd243fbff0fcf03f944ed"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.522031 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l6rr9" 
event={"ID":"116f95bd-c6f1-4137-b3c7-72396c7b4d03","Type":"ContainerStarted","Data":"df9f4556201f68e090018d77c4f5d820cca08d1a0a7a9140ff12c7b5b2b375f7"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.522775 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c" event={"ID":"a50b422a-12cf-4f7f-b13d-5e9c21daeca9","Type":"ContainerStarted","Data":"0e3363b69753c5212c7a9d786de3774a54dfb07313fd4a57f150c05a42af2ea4"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.524258 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" event={"ID":"2610ed1a-32e8-4c01-b0cf-cf5ebe19cf3b","Type":"ContainerStarted","Data":"74c62ff3600bcaecd70ded33e285d89a1afba9da7a9e38934aeeeb2467abe5df"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.536878 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd" event={"ID":"ea889c30-b820-47fa-8232-f96ed56ba8e1","Type":"ContainerStarted","Data":"ae97c8c6a5ef81c6891aed5279a66d46d6f4afa33ad60725287d6330fa8c0580"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.547455 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" event={"ID":"db0e48bf-347d-4985-b809-a25cc11db944","Type":"ContainerStarted","Data":"59dea8809b5d2f951f1b10ea4da9f83675c5071ac03a73eaaacee22c2f31328a"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.548350 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:21 crc kubenswrapper[4745]: E0121 10:39:21.549234 4745 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:22.049218762 +0000 UTC m=+146.510006360 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.649891 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:21 crc kubenswrapper[4745]: E0121 10:39:21.650224 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:22.150211931 +0000 UTC m=+146.610999529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.680293 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jc6n7" event={"ID":"f89ae10c-7af8-4f4e-bba6-10172a20919f","Type":"ContainerStarted","Data":"92e3ae36e0a72312eb24761c841e0c991c8735ee3bf857a7204ce990e0c50d16"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.699290 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" event={"ID":"541d8319-86e8-436d-92a9-6564dafb8388","Type":"ContainerStarted","Data":"05041f9b6ccb47713bcf06b96736cd81d4fcdef1c1de16f442e26012673ef815"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.708801 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9rsxp" event={"ID":"64b535a9-9c1a-47e2-b92c-bc8d6560ed44","Type":"ContainerStarted","Data":"389cc8da2f46d06882d51998b1c5f8ebd8571ba2c9859f7c958017ca59ceff9c"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.726445 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-955l6" podStartSLOduration=122.726416195 podStartE2EDuration="2m2.726416195s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:21.725760008 +0000 UTC m=+146.186547606" 
watchObservedRunningTime="2026-01-21 10:39:21.726416195 +0000 UTC m=+146.187203793" Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.726583 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" event={"ID":"73b9d8b3-fe5c-47c4-bc95-10fc459b4754","Type":"ContainerStarted","Data":"bfccdca50b1af8387284e9e267314681ea66774d2ed850f067495a79b69d4221"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.727107 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-smff5" podStartSLOduration=122.727098823 podStartE2EDuration="2m2.727098823s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:21.561423984 +0000 UTC m=+146.022211582" watchObservedRunningTime="2026-01-21 10:39:21.727098823 +0000 UTC m=+146.187886421" Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.727668 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.727716 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.733937 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m" 
event={"ID":"9a81a913-b0f8-44c1-b1ab-dbeab680f536","Type":"ContainerStarted","Data":"b7e14c80f1eb3292f31339214d7bdf6efcc2bdb2e67021d44631a26144b5c31e"} Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.751501 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:21 crc kubenswrapper[4745]: E0121 10:39:21.753371 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:22.253344467 +0000 UTC m=+146.714132075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.836747 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jc6n7" podStartSLOduration=9.836719871 podStartE2EDuration="9.836719871s" podCreationTimestamp="2026-01-21 10:39:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:21.834098502 +0000 UTC m=+146.294886090" watchObservedRunningTime="2026-01-21 10:39:21.836719871 +0000 UTC m=+146.297507479" Jan 21 
10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.857368 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:21 crc kubenswrapper[4745]: E0121 10:39:21.857839 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:22.357823268 +0000 UTC m=+146.818610866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.889139 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-mm9f9" podStartSLOduration=124.889120995 podStartE2EDuration="2m4.889120995s" podCreationTimestamp="2026-01-21 10:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:21.887107643 +0000 UTC m=+146.347895241" watchObservedRunningTime="2026-01-21 10:39:21.889120995 +0000 UTC m=+146.349908593" Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.914191 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns-operator/dns-operator-744455d44c-5qn2m" podStartSLOduration=122.914176107 podStartE2EDuration="2m2.914176107s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:21.914027304 +0000 UTC m=+146.374814902" watchObservedRunningTime="2026-01-21 10:39:21.914176107 +0000 UTC m=+146.374963705" Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.959977 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:21 crc kubenswrapper[4745]: E0121 10:39:21.960372 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:22.460329228 +0000 UTC m=+146.921116836 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:21 crc kubenswrapper[4745]: I0121 10:39:21.960700 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:21 crc kubenswrapper[4745]: E0121 10:39:21.961131 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:22.461122498 +0000 UTC m=+146.921910096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.062490 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:22 crc kubenswrapper[4745]: E0121 10:39:22.063448 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:22.563411022 +0000 UTC m=+147.024198620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.164221 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:22 crc kubenswrapper[4745]: E0121 10:39:22.164511 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:22.664498624 +0000 UTC m=+147.125286222 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.268286 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:22 crc kubenswrapper[4745]: E0121 10:39:22.268701 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:22.768676237 +0000 UTC m=+147.229463835 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.371600 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:22 crc kubenswrapper[4745]: E0121 10:39:22.372342 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:22.872330737 +0000 UTC m=+147.333118335 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.477608 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:22 crc kubenswrapper[4745]: E0121 10:39:22.478082 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:22.978058141 +0000 UTC m=+147.438845739 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.580822 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:22 crc kubenswrapper[4745]: E0121 10:39:22.581362 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:23.08133831 +0000 UTC m=+147.542125908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.697660 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:22 crc kubenswrapper[4745]: E0121 10:39:22.698253 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:23.198226669 +0000 UTC m=+147.659014267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.714029 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f"
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.760015 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.760141 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.808123 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:22 crc kubenswrapper[4745]: E0121 10:39:22.809595 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:23.309578472 +0000 UTC m=+147.770366300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.862325 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" event={"ID":"afdc07ef-bbdc-4788-9393-fc47b4fb2601","Type":"ContainerStarted","Data":"570a4c33a2d9fd2081512cb11163f8905796093c54af09f1e4044098dc8ac7b5"}
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.864180 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9"
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.866303 4745 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bchs9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body=
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.866333 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" podUID="afdc07ef-bbdc-4788-9393-fc47b4fb2601" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused"
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.871179 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-l6rr9" event={"ID":"116f95bd-c6f1-4137-b3c7-72396c7b4d03","Type":"ContainerStarted","Data":"7e2e8e704edb5b1377ecd4194a09fb58cecc93a211496f565f8fdbf3e4d66fd1"}
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.889076 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" event={"ID":"541d8319-86e8-436d-92a9-6564dafb8388","Type":"ContainerStarted","Data":"e5ba2436bc5fe626a3fbf2630641de0928df744162cd50a7490e043491cd029d"}
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.910842 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:22 crc kubenswrapper[4745]: E0121 10:39:22.911392 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:23.411367842 +0000 UTC m=+147.872155440 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.920613 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-j4phh"
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.921304 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-j4phh"
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.924709 4745 patch_prober.go:28] interesting pod/console-f9d7485db-j4phh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.924755 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j4phh" podUID="284744f3-7eb6-4977-87c8-5c311188f840" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.949110 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" event={"ID":"9f884d1f-fcd5-4179-9350-6b41b3d136b7","Type":"ContainerStarted","Data":"6b8b434c64422a043ec2bee94d2d72cf25a45dc895f3bbf8c99d4ad1ac03db40"}
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.949696 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4"
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.969662 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" podStartSLOduration=123.96952184 podStartE2EDuration="2m3.96952184s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:22.909556515 +0000 UTC m=+147.370344123" watchObservedRunningTime="2026-01-21 10:39:22.96952184 +0000 UTC m=+147.430309438"
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.972363 4745 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lw9m4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body=
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.972452 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" podUID="9f884d1f-fcd5-4179-9350-6b41b3d136b7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused"
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.980690 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-l6rr9" podStartSLOduration=10.980643554 podStartE2EDuration="10.980643554s" podCreationTimestamp="2026-01-21 10:39:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:22.94116589 +0000 UTC m=+147.401953488" watchObservedRunningTime="2026-01-21 10:39:22.980643554 +0000 UTC m=+147.441431152"
Jan 21 10:39:22 crc kubenswrapper[4745]: I0121 10:39:22.982312 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-fmgwt" podStartSLOduration=123.982305728 podStartE2EDuration="2m3.982305728s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:22.973249108 +0000 UTC m=+147.434036706" watchObservedRunningTime="2026-01-21 10:39:22.982305728 +0000 UTC m=+147.443093326"
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.017206 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" event={"ID":"f9c06282-abf7-4d46-90df-6d48394448cf","Type":"ContainerStarted","Data":"a6890135d3d70f9a168259c994a02e6c329dddf80c581cf9a0196f4d889ec250"}
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.018139 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4"
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.018932 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.029130 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" podStartSLOduration=124.029101814 podStartE2EDuration="2m4.029101814s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:23.023926767 +0000 UTC m=+147.484714365" watchObservedRunningTime="2026-01-21 10:39:23.029101814 +0000 UTC m=+147.489889402"
Jan 21 10:39:23 crc kubenswrapper[4745]: E0121 10:39:23.032308 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:23.532289468 +0000 UTC m=+147.993077066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.047396 4745 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n5ft4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.047447 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" podUID="f9c06282-abf7-4d46-90df-6d48394448cf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.082207 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" event={"ID":"71f88dc0-76ba-49bd-8d25-87454497d61d","Type":"ContainerStarted","Data":"c42afbb5b397422fdf8af25727e31bb1520b0a6041981134a7a161548a100287"}
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.108352 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5" event={"ID":"1eb90eab-f69a-4fef-aef1-b8f4473b91fd","Type":"ContainerStarted","Data":"9284f348613950fe8ddb570fd916f1eb9a387e9c04775f0f0bb68a2783e02c50"}
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.142169 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:23 crc kubenswrapper[4745]: E0121 10:39:23.142318 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:23.642302086 +0000 UTC m=+148.103089674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.142541 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:23 crc kubenswrapper[4745]: E0121 10:39:23.142806 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:23.642800479 +0000 UTC m=+148.103588077 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.165225 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" podStartSLOduration=124.165202841 podStartE2EDuration="2m4.165202841s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:23.146696762 +0000 UTC m=+147.607484360" watchObservedRunningTime="2026-01-21 10:39:23.165202841 +0000 UTC m=+147.625990429"
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.236320 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" event={"ID":"e1c4364e-4898-4cd5-9ac7-9c800820e244","Type":"ContainerStarted","Data":"2f5f2ac464f8a0752429ee2e88b11be2441b5c21280fb92d81d31fc9b4b23321"}
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.238782 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-nfgt5" podStartSLOduration=124.238762715 podStartE2EDuration="2m4.238762715s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:23.236936407 +0000 UTC m=+147.697724005" watchObservedRunningTime="2026-01-21 10:39:23.238762715 +0000 UTC m=+147.699550313"
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.243509 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:23 crc kubenswrapper[4745]: E0121 10:39:23.244088 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:23.7438581 +0000 UTC m=+148.204645688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.254987 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" event={"ID":"f5752ba7-8465-4a19-b7a3-d2b4effe5f23","Type":"ContainerStarted","Data":"fb47b248bb03db2b3beead2e683641aa84400d4975d7f133f963219fb17a51c8"}
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.303329 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" event={"ID":"5fb54c7c-4796-4d7d-8fce-519b5323c2ad","Type":"ContainerStarted","Data":"a2ed4b39efafb5b251a3d1c4da03a212f021d727672e521b212d69186bd0be31"}
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.350190 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:23 crc kubenswrapper[4745]: E0121 10:39:23.351132 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:23.851119665 +0000 UTC m=+148.311907263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.396346 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" podStartSLOduration=125.396328949 podStartE2EDuration="2m5.396328949s" podCreationTimestamp="2026-01-21 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:23.349427501 +0000 UTC m=+147.810215099" watchObservedRunningTime="2026-01-21 10:39:23.396328949 +0000 UTC m=+147.857116547"
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.451703 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:23 crc kubenswrapper[4745]: E0121 10:39:23.454514 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:23.954473017 +0000 UTC m=+148.415260775 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.554833 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:23 crc kubenswrapper[4745]: E0121 10:39:23.588437 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.088414907 +0000 UTC m=+148.549202505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.661665 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:23 crc kubenswrapper[4745]: E0121 10:39:23.662019 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.162002471 +0000 UTC m=+148.622790069 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.789303 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:23 crc kubenswrapper[4745]: E0121 10:39:23.789817 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.289795929 +0000 UTC m=+148.750583527 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.796822 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 21 10:39:23 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld
Jan 21 10:39:23 crc kubenswrapper[4745]: [+]process-running ok
Jan 21 10:39:23 crc kubenswrapper[4745]: healthz check failed
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.796887 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 21 10:39:23 crc kubenswrapper[4745]: I0121 10:39:23.900438 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:23 crc kubenswrapper[4745]: E0121 10:39:23.900969 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.400951776 +0000 UTC m=+148.861739374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.004351 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:24 crc kubenswrapper[4745]: E0121 10:39:24.004817 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.504802962 +0000 UTC m=+148.965590560 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.159914 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.160179 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.160212 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.160251 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:39:24 crc kubenswrapper[4745]: E0121 10:39:24.160855 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.660824985 +0000 UTC m=+149.121612583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.162557 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.177409 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.179112 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.270641 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.270748 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:39:24 crc kubenswrapper[4745]: E0121 10:39:24.272850 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.772807914 +0000 UTC m=+149.233595682 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.308516 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.341054 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.341082 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body=
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.341136 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused"
Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.341144 4745 prober.go:107]
"Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.343645 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" event={"ID":"b6e57768-b27f-42e3-9bd5-2e8eac4f06ce","Type":"ContainerStarted","Data":"22401bef00d3ad52073b0c2fe1efa7eda75b52966cfae6bd308b92dbcb847c87"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.422382 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.423602 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:24 crc kubenswrapper[4745]: E0121 10:39:24.423906 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:24.923891768 +0000 UTC m=+149.384679366 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.424614 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.424893 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.428983 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb" event={"ID":"c1f4c3a6-f097-4220-a03d-a34e2e70027a","Type":"ContainerStarted","Data":"dfe2c9ab912241fc185205a78085cf5912d70b53b12cccf9e0581c69aa4d94ce"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.430629 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tf44k" event={"ID":"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895","Type":"ContainerStarted","Data":"58f2d0b696a5e9bb22768114428e0ad370e8419c27de72fcd9e7edc2306e3e12"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.464346 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2ljcm" podStartSLOduration=125.464330747 podStartE2EDuration="2m5.464330747s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 10:39:23.397789139 +0000 UTC m=+147.858576737" watchObservedRunningTime="2026-01-21 10:39:24.464330747 +0000 UTC m=+148.925118345" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.464475 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7r9dl" podStartSLOduration=124.464470591 podStartE2EDuration="2m4.464470591s" podCreationTimestamp="2026-01-21 10:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:24.462925829 +0000 UTC m=+148.923713427" watchObservedRunningTime="2026-01-21 10:39:24.464470591 +0000 UTC m=+148.925258189" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.465965 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9rsxp" event={"ID":"64b535a9-9c1a-47e2-b92c-bc8d6560ed44","Type":"ContainerStarted","Data":"fe94c296dc7df604dde9d4dfcaf36343b3e9657e39275c7c856177134d9860bf"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.495633 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" event={"ID":"73b9d8b3-fe5c-47c4-bc95-10fc459b4754","Type":"ContainerStarted","Data":"0576f244fdaa4477d536df1db238514fd9c5798c15ea7194ef7f1d9c82802d78"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.521323 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" event={"ID":"6dff5125-5877-469c-9630-f935a526a97e","Type":"ContainerStarted","Data":"bf553dac89d63fb5f6faaa0a7ca17da602a614a8b2267210e785eac2b8d3a68a"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.524103 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd" 
event={"ID":"ea889c30-b820-47fa-8232-f96ed56ba8e1","Type":"ContainerStarted","Data":"1e987385e1b856cfdba2743eb46884ba1e880513acfc46046f091e677a1824a7"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.533352 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:24 crc kubenswrapper[4745]: E0121 10:39:24.534970 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:25.034957324 +0000 UTC m=+149.495744922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.543408 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-9dn2q" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.584934 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" event={"ID":"db0e48bf-347d-4985-b809-a25cc11db944","Type":"ContainerStarted","Data":"54195a2c3c6db705824f88ec8d350e9918b296e763b6ac307428033a2a0d69c9"} Jan 21 10:39:24 crc 
kubenswrapper[4745]: I0121 10:39:24.585933 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.636789 4745 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fcg2s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.636845 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" podUID="db0e48bf-347d-4985-b809-a25cc11db944" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.637162 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:24 crc kubenswrapper[4745]: E0121 10:39:24.637463 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:25.137438792 +0000 UTC m=+149.598226540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.651609 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-w6snb" podStartSLOduration=126.651595626 podStartE2EDuration="2m6.651595626s" podCreationTimestamp="2026-01-21 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:24.580372904 +0000 UTC m=+149.041160502" watchObservedRunningTime="2026-01-21 10:39:24.651595626 +0000 UTC m=+149.112383224" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.654277 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" event={"ID":"f5752ba7-8465-4a19-b7a3-d2b4effe5f23","Type":"ContainerStarted","Data":"4335c35b12622be76e968c216eb3b46106a058bdb53659c0798e15dcbaac6ff3"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.654968 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.660306 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n7p28" event={"ID":"c531fa6e-de28-476b-8b34-aca8b0e2cc56","Type":"ContainerStarted","Data":"a307e9101d2feee7b90ad24ab6b3c8b5b00b21f29baf7258b9539b8eec54ec44"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 
10:39:24.662036 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" event={"ID":"71f88dc0-76ba-49bd-8d25-87454497d61d","Type":"ContainerStarted","Data":"f562f36b2f4c0f98be9c4578f8dbbd28a607c82637c78f7333261cdf5f1adf05"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.663562 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c" event={"ID":"a50b422a-12cf-4f7f-b13d-5e9c21daeca9","Type":"ContainerStarted","Data":"5620a9862ce33dabcbd68ef7e0e34b52cb7188fb43ef349b413ed4ed48b9f6e7"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.663583 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c" event={"ID":"a50b422a-12cf-4f7f-b13d-5e9c21daeca9","Type":"ContainerStarted","Data":"2d20fff0b86962fefac59c247421b1b5bb56643093a8dd1d34c2e41bd844bda7"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.673487 4745 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bchs9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body= Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.673673 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" event={"ID":"be583ef4-64bc-485e-8f93-d48e090f8197","Type":"ContainerStarted","Data":"0d765face890cbc7306140669c8affc01a0840cee1c46b5b3e8b3eb838867d46"} Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.674387 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" podUID="afdc07ef-bbdc-4788-9393-fc47b4fb2601" containerName="packageserver" probeResult="failure" 
output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.676860 4745 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lw9m4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.676895 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" podUID="9f884d1f-fcd5-4179-9350-6b41b3d136b7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.725124 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.738083 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:24 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:24 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:24 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.738149 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.744645 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:24 crc kubenswrapper[4745]: E0121 10:39:24.745346 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:25.245325303 +0000 UTC m=+149.706112891 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.788346 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.789877 4745 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bchs9 container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body= Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.789924 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" podUID="afdc07ef-bbdc-4788-9393-fc47b4fb2601" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.790023 4745 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bchs9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body= Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.790040 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" podUID="afdc07ef-bbdc-4788-9393-fc47b4fb2601" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.796885 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-dhkkd" podStartSLOduration=125.796869535 podStartE2EDuration="2m5.796869535s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:24.795600782 +0000 UTC m=+149.256388380" watchObservedRunningTime="2026-01-21 10:39:24.796869535 +0000 UTC m=+149.257657133" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.797133 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-k2kq9" podStartSLOduration=124.797130002 podStartE2EDuration="2m4.797130002s" podCreationTimestamp="2026-01-21 10:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:24.723064405 
+0000 UTC m=+149.183852003" watchObservedRunningTime="2026-01-21 10:39:24.797130002 +0000 UTC m=+149.257917600" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.849441 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:24 crc kubenswrapper[4745]: E0121 10:39:24.849657 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:25.349611579 +0000 UTC m=+149.810399177 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.850107 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:24 crc kubenswrapper[4745]: E0121 10:39:24.850388 4745 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:25.350376719 +0000 UTC m=+149.811164317 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.895468 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-hhs27" podStartSLOduration=125.89542987 podStartE2EDuration="2m5.89542987s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:24.894371532 +0000 UTC m=+149.355159130" watchObservedRunningTime="2026-01-21 10:39:24.89542987 +0000 UTC m=+149.356217468" Jan 21 10:39:24 crc kubenswrapper[4745]: I0121 10:39:24.951175 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:24 crc kubenswrapper[4745]: E0121 10:39:24.951544 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-01-21 10:39:25.451517262 +0000 UTC m=+149.912304860 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.060699 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:25 crc kubenswrapper[4745]: E0121 10:39:25.061635 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:25.561613062 +0000 UTC m=+150.022400660 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.117866 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-nzchk" podStartSLOduration=126.117851499 podStartE2EDuration="2m6.117851499s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:25.107958207 +0000 UTC m=+149.568745805" watchObservedRunningTime="2026-01-21 10:39:25.117851499 +0000 UTC m=+149.578639097" Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.117964 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-mxdcm" podStartSLOduration=126.117960102 podStartE2EDuration="2m6.117960102s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:25.009542756 +0000 UTC m=+149.470330364" watchObservedRunningTime="2026-01-21 10:39:25.117960102 +0000 UTC m=+149.578747710" Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.163145 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:25 crc kubenswrapper[4745]: E0121 10:39:25.163479 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:25.663453874 +0000 UTC m=+150.124241472 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.249172 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" podStartSLOduration=126.249134639 podStartE2EDuration="2m6.249134639s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:25.186187325 +0000 UTC m=+149.646974923" watchObservedRunningTime="2026-01-21 10:39:25.249134639 +0000 UTC m=+149.709922237" Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.264263 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" 
Jan 21 10:39:25 crc kubenswrapper[4745]: E0121 10:39:25.264649 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:25.764633698 +0000 UTC m=+150.225421306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.325453 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" podStartSLOduration=126.325437155 podStartE2EDuration="2m6.325437155s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:25.254890071 +0000 UTC m=+149.715677679" watchObservedRunningTime="2026-01-21 10:39:25.325437155 +0000 UTC m=+149.786224753" Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.359269 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-n7p28" podStartSLOduration=128.359240719 podStartE2EDuration="2m8.359240719s" podCreationTimestamp="2026-01-21 10:37:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:25.33091118 +0000 UTC m=+149.791698768" watchObservedRunningTime="2026-01-21 10:39:25.359240719 +0000 UTC m=+149.820028317" Jan 21 
10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.367131 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:25 crc kubenswrapper[4745]: E0121 10:39:25.367442 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:25.867426405 +0000 UTC m=+150.328214003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.498852 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:25 crc kubenswrapper[4745]: E0121 10:39:25.499240 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 10:39:25.999228239 +0000 UTC m=+150.460015837 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.531008 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-97m6c" podStartSLOduration=126.530974937 podStartE2EDuration="2m6.530974937s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:25.367979199 +0000 UTC m=+149.828766807" watchObservedRunningTime="2026-01-21 10:39:25.530974937 +0000 UTC m=+149.991762535" Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.558516 4745 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fcg2s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.558609 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" podUID="db0e48bf-347d-4985-b809-a25cc11db944" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.558927 4745 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fcg2s container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.558978 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" podUID="db0e48bf-347d-4985-b809-a25cc11db944" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.600191 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:25 crc kubenswrapper[4745]: E0121 10:39:25.600598 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:26.100579347 +0000 UTC m=+150.561366945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.722968 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:25 crc kubenswrapper[4745]: E0121 10:39:25.723360 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:26.223345292 +0000 UTC m=+150.684132890 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.726966 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:25 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:25 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:25 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.727050 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.737998 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.751119 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n7p28" event={"ID":"c531fa6e-de28-476b-8b34-aca8b0e2cc56","Type":"ContainerStarted","Data":"b445d18e7b80df0ea0e05abdacb482991002e8a692608a9cef27cb2c464f081b"} Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.820279 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-9rsxp" 
event={"ID":"64b535a9-9c1a-47e2-b92c-bc8d6560ed44","Type":"ContainerStarted","Data":"b00071d161d850ea7f988fa137d8400bcf8c719ee9b08623805fcfaf470cf981"} Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.820410 4745 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fcg2s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.820472 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" podUID="db0e48bf-347d-4985-b809-a25cc11db944" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.828516 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:25 crc kubenswrapper[4745]: E0121 10:39:25.829602 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:26.32958167 +0000 UTC m=+150.790369268 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:25 crc kubenswrapper[4745]: I0121 10:39:25.932746 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:25 crc kubenswrapper[4745]: E0121 10:39:25.933194 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:26.433177377 +0000 UTC m=+150.893964975 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:25.998830 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-9rsxp" podStartSLOduration=14.998813072 podStartE2EDuration="14.998813072s" podCreationTimestamp="2026-01-21 10:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:25.997943629 +0000 UTC m=+150.458731227" watchObservedRunningTime="2026-01-21 10:39:25.998813072 +0000 UTC m=+150.459600670" Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.034427 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.034841 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:26.534825724 +0000 UTC m=+150.995613322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.107628 4745 csr.go:261] certificate signing request csr-c6m8m is approved, waiting to be issued Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.119509 4745 csr.go:257] certificate signing request csr-c6m8m is issued Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.141272 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.141683 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:26.641671237 +0000 UTC m=+151.102458835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.266406 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.266755 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:26.766726163 +0000 UTC m=+151.227513761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.267078 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.267587 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:26.767568905 +0000 UTC m=+151.228356503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.371378 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.372149 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:26.872120829 +0000 UTC m=+151.332909037 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.476162 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.476472 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:26.976461576 +0000 UTC m=+151.437249174 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.580387 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.580821 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:27.080801964 +0000 UTC m=+151.541589562 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.682577 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.682948 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:27.182936944 +0000 UTC m=+151.643724542 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.732506 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:26 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:26 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:26 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.732597 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.787654 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.787811 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:39:27.287788654 +0000 UTC m=+151.748576252 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.787849 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.788166 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:27.288155324 +0000 UTC m=+151.748942922 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.820831 4745 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-bchs9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.820923 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" podUID="afdc07ef-bbdc-4788-9393-fc47b4fb2601" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.845717 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"0e78d172d47cf19012259c1e59360dc21b7d8c5ab2b9adf090be3007fca818f3"} Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.851569 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1c64b4f3b7ea53558c163e821b6e1dae8219e175989497701db94b90aab434e8"} Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.853709 4745 
patch_prober.go:28] interesting pod/marketplace-operator-79b997595-fcg2s container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" start-of-body= Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.853748 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" podUID="db0e48bf-347d-4985-b809-a25cc11db944" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.28:8080/healthz\": dial tcp 10.217.0.28:8080: connect: connection refused" Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.853989 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.894602 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.895170 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:27.395151162 +0000 UTC m=+151.855938760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:26 crc kubenswrapper[4745]: I0121 10:39:26.996624 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:26 crc kubenswrapper[4745]: E0121 10:39:26.997241 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:27.49722561 +0000 UTC m=+151.958013208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:27 crc kubenswrapper[4745]: W0121 10:39:27.079780 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-4e0ff2160589ba8bf929e95a10096a9906d9fd42ad189b87d867f8e23f06f799 WatchSource:0}: Error finding container 4e0ff2160589ba8bf929e95a10096a9906d9fd42ad189b87d867f8e23f06f799: Status 404 returned error can't find the container with id 4e0ff2160589ba8bf929e95a10096a9906d9fd42ad189b87d867f8e23f06f799 Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.097605 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:27 crc kubenswrapper[4745]: E0121 10:39:27.097976 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:27.597955522 +0000 UTC m=+152.058743120 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.124795 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-21 10:34:26 +0000 UTC, rotation deadline is 2026-12-15 18:03:09.226555395 +0000 UTC Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.124843 4745 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7879h23m42.101715313s for next certificate rotation Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.203296 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:27 crc kubenswrapper[4745]: E0121 10:39:27.203603 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:27.703592143 +0000 UTC m=+152.164379741 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.304545 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:27 crc kubenswrapper[4745]: E0121 10:39:27.305008 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:27.804993064 +0000 UTC m=+152.265780662 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.406393 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-52d7q"] Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.406391 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:27 crc kubenswrapper[4745]: E0121 10:39:27.406658 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:27.90664527 +0000 UTC m=+152.367432858 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.407682 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.413979 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.492043 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52d7q"] Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.512770 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:27 crc kubenswrapper[4745]: E0121 10:39:27.512960 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:28.01293653 +0000 UTC m=+152.473724128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.513056 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-249dm\" (UniqueName: \"kubernetes.io/projected/be69561a-c25a-4e96-b75f-4f5664c5f2c4-kube-api-access-249dm\") pod \"community-operators-52d7q\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.513142 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.513167 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-catalog-content\") pod \"community-operators-52d7q\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.513239 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-utilities\") pod \"community-operators-52d7q\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:39:27 crc kubenswrapper[4745]: E0121 10:39:27.513523 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:28.013515365 +0000 UTC m=+152.474302963 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.614062 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.614268 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-catalog-content\") pod \"community-operators-52d7q\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.614312 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-utilities\") pod \"community-operators-52d7q\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.614356 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-249dm\" (UniqueName: \"kubernetes.io/projected/be69561a-c25a-4e96-b75f-4f5664c5f2c4-kube-api-access-249dm\") pod \"community-operators-52d7q\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:39:27 crc kubenswrapper[4745]: E0121 10:39:27.614907 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:28.114881484 +0000 UTC m=+152.575669082 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.616251 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-catalog-content\") pod \"community-operators-52d7q\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.617339 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-utilities\") pod \"community-operators-52d7q\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.670384 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c6pc4"] Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.671635 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.678919 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.717223 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-catalog-content\") pod \"certified-operators-c6pc4\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.717287 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.717367 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj44x\" (UniqueName: \"kubernetes.io/projected/9d721ed0-4c33-4912-8973-e583db1e2075-kube-api-access-fj44x\") pod \"certified-operators-c6pc4\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.717397 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-utilities\") pod \"certified-operators-c6pc4\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:39:27 crc 
kubenswrapper[4745]: E0121 10:39:27.717747 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:28.217736722 +0000 UTC m=+152.678524320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.773225 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:27 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:27 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:27 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.773284 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.821174 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:27 crc kubenswrapper[4745]: E0121 10:39:27.821405 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:28.321369221 +0000 UTC m=+152.782156829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.821452 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-catalog-content\") pod \"certified-operators-c6pc4\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.821493 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.821584 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj44x\" (UniqueName: 
\"kubernetes.io/projected/9d721ed0-4c33-4912-8973-e583db1e2075-kube-api-access-fj44x\") pod \"certified-operators-c6pc4\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.821612 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-utilities\") pod \"certified-operators-c6pc4\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:39:27 crc kubenswrapper[4745]: E0121 10:39:27.821915 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:28.321903455 +0000 UTC m=+152.782691253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.822077 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-utilities\") pod \"certified-operators-c6pc4\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.822449 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6pc4"] Jan 21 10:39:27 crc 
kubenswrapper[4745]: I0121 10:39:27.822513 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-catalog-content\") pod \"certified-operators-c6pc4\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.859456 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8b6bc54d89a09300e15df699f8213725e6178655fe3ce71da1a98d23ee73ceb1"} Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.860202 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"4e0ff2160589ba8bf929e95a10096a9906d9fd42ad189b87d867f8e23f06f799"} Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.861103 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sbg8m"] Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.862058 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d47b702aed964e30e7214f8dc0fd37a01239b4d21088b15e8d3cb5f2e7a2a1ba"} Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.862064 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.873443 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-249dm\" (UniqueName: \"kubernetes.io/projected/be69561a-c25a-4e96-b75f-4f5664c5f2c4-kube-api-access-249dm\") pod \"community-operators-52d7q\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.881440 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tf44k" event={"ID":"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895","Type":"ContainerStarted","Data":"bea489a8cc7846fd1bef9a06f18d8eb8abbfd27064df3e7de47f32e143fdb0ca"} Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.888980 4745 generic.go:334] "Generic (PLEG): container finished" podID="e1c4364e-4898-4cd5-9ac7-9c800820e244" containerID="2f5f2ac464f8a0752429ee2e88b11be2441b5c21280fb92d81d31fc9b4b23321" exitCode=0 Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.889064 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" event={"ID":"e1c4364e-4898-4cd5-9ac7-9c800820e244","Type":"ContainerDied","Data":"2f5f2ac464f8a0752429ee2e88b11be2441b5c21280fb92d81d31fc9b4b23321"} Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.891732 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f70985e6a880ec372771a0303da83af1a501769aef3f4d3d05cc79d434ac5cea"} Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.891758 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.922725 4745 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.923044 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-utilities\") pod \"community-operators-sbg8m\" (UID: \"bfaacdad-12f1-4904-96db-f24427117da4\") " pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.923158 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-catalog-content\") pod \"community-operators-sbg8m\" (UID: \"bfaacdad-12f1-4904-96db-f24427117da4\") " pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:39:27 crc kubenswrapper[4745]: I0121 10:39:27.923194 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8vp4\" (UniqueName: \"kubernetes.io/projected/bfaacdad-12f1-4904-96db-f24427117da4-kube-api-access-r8vp4\") pod \"community-operators-sbg8m\" (UID: \"bfaacdad-12f1-4904-96db-f24427117da4\") " pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:39:27 crc kubenswrapper[4745]: E0121 10:39:27.923338 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:28.423309485 +0000 UTC m=+152.884097083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.007133 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj44x\" (UniqueName: \"kubernetes.io/projected/9d721ed0-4c33-4912-8973-e583db1e2075-kube-api-access-fj44x\") pod \"certified-operators-c6pc4\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.024322 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-utilities\") pod \"community-operators-sbg8m\" (UID: \"bfaacdad-12f1-4904-96db-f24427117da4\") " pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.025003 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.025152 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-catalog-content\") pod \"community-operators-sbg8m\" (UID: 
\"bfaacdad-12f1-4904-96db-f24427117da4\") " pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.025177 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8vp4\" (UniqueName: \"kubernetes.io/projected/bfaacdad-12f1-4904-96db-f24427117da4-kube-api-access-r8vp4\") pod \"community-operators-sbg8m\" (UID: \"bfaacdad-12f1-4904-96db-f24427117da4\") " pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.025267 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-utilities\") pod \"community-operators-sbg8m\" (UID: \"bfaacdad-12f1-4904-96db-f24427117da4\") " pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.026041 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-catalog-content\") pod \"community-operators-sbg8m\" (UID: \"bfaacdad-12f1-4904-96db-f24427117da4\") " pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:39:28 crc kubenswrapper[4745]: E0121 10:39:28.026094 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:28.526067521 +0000 UTC m=+152.986855299 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.049859 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.122382 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sbg8m"] Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.122444 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lgfp9"] Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.127411 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lgfp9" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.127794 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:28 crc kubenswrapper[4745]: E0121 10:39:28.128412 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:28.628386956 +0000 UTC m=+153.089174554 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.207187 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8vp4\" (UniqueName: \"kubernetes.io/projected/bfaacdad-12f1-4904-96db-f24427117da4-kube-api-access-r8vp4\") pod \"community-operators-sbg8m\" (UID: \"bfaacdad-12f1-4904-96db-f24427117da4\") " pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.229752 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.229821 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-catalog-content\") pod \"certified-operators-lgfp9\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") " pod="openshift-marketplace/certified-operators-lgfp9" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.229850 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwfpl\" (UniqueName: 
\"kubernetes.io/projected/58df78fb-8f34-4442-8547-cacf761708dd-kube-api-access-mwfpl\") pod \"certified-operators-lgfp9\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") " pod="openshift-marketplace/certified-operators-lgfp9" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.229874 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-utilities\") pod \"certified-operators-lgfp9\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") " pod="openshift-marketplace/certified-operators-lgfp9" Jan 21 10:39:28 crc kubenswrapper[4745]: E0121 10:39:28.230153 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:28.730142055 +0000 UTC m=+153.190929653 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.230521 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lgfp9"] Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.292918 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.333120 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.333628 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-catalog-content\") pod \"certified-operators-lgfp9\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") " pod="openshift-marketplace/certified-operators-lgfp9" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.333669 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwfpl\" (UniqueName: \"kubernetes.io/projected/58df78fb-8f34-4442-8547-cacf761708dd-kube-api-access-mwfpl\") pod \"certified-operators-lgfp9\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") " pod="openshift-marketplace/certified-operators-lgfp9" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.333695 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-utilities\") pod \"certified-operators-lgfp9\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") " pod="openshift-marketplace/certified-operators-lgfp9" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.334128 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-utilities\") pod \"certified-operators-lgfp9\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") " 
pod="openshift-marketplace/certified-operators-lgfp9" Jan 21 10:39:28 crc kubenswrapper[4745]: E0121 10:39:28.334204 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:28.834187005 +0000 UTC m=+153.294974603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.334407 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-catalog-content\") pod \"certified-operators-lgfp9\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") " pod="openshift-marketplace/certified-operators-lgfp9" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.440313 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:28 crc kubenswrapper[4745]: E0121 10:39:28.440702 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-21 10:39:28.94068987 +0000 UTC m=+153.401477468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.440817 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwfpl\" (UniqueName: \"kubernetes.io/projected/58df78fb-8f34-4442-8547-cacf761708dd-kube-api-access-mwfpl\") pod \"certified-operators-lgfp9\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") " pod="openshift-marketplace/certified-operators-lgfp9" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.471726 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lgfp9" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.490233 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.490928 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.492577 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.527145 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.527375 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.561113 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.561341 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/210bd7df-f463-4bad-ac72-9118c21a6bbd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"210bd7df-f463-4bad-ac72-9118c21a6bbd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.561380 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/210bd7df-f463-4bad-ac72-9118c21a6bbd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"210bd7df-f463-4bad-ac72-9118c21a6bbd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:39:28 crc kubenswrapper[4745]: E0121 10:39:28.561500 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-21 10:39:29.061483512 +0000 UTC m=+153.522271110 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.592564 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.662298 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.662355 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/210bd7df-f463-4bad-ac72-9118c21a6bbd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"210bd7df-f463-4bad-ac72-9118c21a6bbd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.662389 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/210bd7df-f463-4bad-ac72-9118c21a6bbd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"210bd7df-f463-4bad-ac72-9118c21a6bbd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 
10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.662488 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/210bd7df-f463-4bad-ac72-9118c21a6bbd-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"210bd7df-f463-4bad-ac72-9118c21a6bbd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:39:28 crc kubenswrapper[4745]: E0121 10:39:28.662790 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:29.162776629 +0000 UTC m=+153.623564237 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.734246 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:28 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:28 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:28 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.734325 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.751498 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/210bd7df-f463-4bad-ac72-9118c21a6bbd-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"210bd7df-f463-4bad-ac72-9118c21a6bbd\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.763749 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:28 crc kubenswrapper[4745]: E0121 10:39:28.763962 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:29.263932172 +0000 UTC m=+153.724719760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.764132 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:28 crc kubenswrapper[4745]: E0121 10:39:28.764497 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:29.264478507 +0000 UTC m=+153.725266105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.834048 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.868085 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:28 crc kubenswrapper[4745]: E0121 10:39:28.868510 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:29.368492146 +0000 UTC m=+153.829279744 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.940472 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-tf44k" event={"ID":"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895","Type":"ContainerStarted","Data":"93265cb6369a5b72ffa32612e6d9dcdafc1278a359b757e71c2ab3d4f28abe39"} Jan 21 10:39:28 crc kubenswrapper[4745]: I0121 10:39:28.973436 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: 
\"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:28 crc kubenswrapper[4745]: E0121 10:39:28.973814 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:29.473803489 +0000 UTC m=+153.934591087 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.076403 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:29 crc kubenswrapper[4745]: E0121 10:39:29.076513 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:29.576490793 +0000 UTC m=+154.037278391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.076794 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:29 crc kubenswrapper[4745]: E0121 10:39:29.077121 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:29.577110439 +0000 UTC m=+154.037898037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.184144 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:29 crc kubenswrapper[4745]: E0121 10:39:29.184553 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:29.684517118 +0000 UTC m=+154.145304716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.286802 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:29 crc kubenswrapper[4745]: E0121 10:39:29.287711 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:29.787692625 +0000 UTC m=+154.248480223 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.370757 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.372364 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.388227 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:29 crc kubenswrapper[4745]: E0121 10:39:29.388558 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:29.88854353 +0000 UTC m=+154.349331118 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.492604 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:29 crc kubenswrapper[4745]: E0121 10:39:29.494261 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:29.994219093 +0000 UTC m=+154.455006681 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.594617 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:29 crc kubenswrapper[4745]: E0121 10:39:29.595247 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:30.095223563 +0000 UTC m=+154.556011161 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.696942 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:29 crc kubenswrapper[4745]: E0121 10:39:29.697245 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:30.197233829 +0000 UTC m=+154.658021427 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.728155 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:29 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:29 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:29 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.728627 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.798667 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqts"] Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.800749 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.802025 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:29 crc kubenswrapper[4745]: E0121 10:39:29.802550 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:30.302512771 +0000 UTC m=+154.763300369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.803767 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.815281 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqts"] Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.819345 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-52d7q"] Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.907676 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.907742 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-catalog-content\") pod \"redhat-marketplace-rgqts\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.907808 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-utilities\") pod \"redhat-marketplace-rgqts\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.907830 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hk28m\" (UniqueName: \"kubernetes.io/projected/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-kube-api-access-hk28m\") pod \"redhat-marketplace-rgqts\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:39:29 crc kubenswrapper[4745]: E0121 10:39:29.908234 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:30.408216655 +0000 UTC m=+154.869004253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:29 crc kubenswrapper[4745]: I0121 10:39:29.954145 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sbg8m"] Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.010234 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.010545 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-catalog-content\") pod \"redhat-marketplace-rgqts\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.010611 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-utilities\") pod \"redhat-marketplace-rgqts\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.010635 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hk28m\" (UniqueName: 
\"kubernetes.io/projected/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-kube-api-access-hk28m\") pod \"redhat-marketplace-rgqts\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:39:30 crc kubenswrapper[4745]: E0121 10:39:30.011169 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:30.511145816 +0000 UTC m=+154.971933414 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.011668 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-catalog-content\") pod \"redhat-marketplace-rgqts\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.011882 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-utilities\") pod \"redhat-marketplace-rgqts\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.044011 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-tf44k" event={"ID":"59cfcfcd-7ed9-4f60-85ad-fcb228dc1895","Type":"ContainerStarted","Data":"86085b0fce3761bf9907c11a2078c42b85817d957364f757e1a51b59609584fb"} Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.071329 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hk28m\" (UniqueName: \"kubernetes.io/projected/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-kube-api-access-hk28m\") pod \"redhat-marketplace-rgqts\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.086970 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-tf44k" podStartSLOduration=19.086943629 podStartE2EDuration="19.086943629s" podCreationTimestamp="2026-01-21 10:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:30.081115155 +0000 UTC m=+154.541902763" watchObservedRunningTime="2026-01-21 10:39:30.086943629 +0000 UTC m=+154.547731227" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.116631 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52d7q" event={"ID":"be69561a-c25a-4e96-b75f-4f5664c5f2c4","Type":"ContainerStarted","Data":"40c1bfc4568a4454d4aeb61f635e1a8b1c2e3039caaa63a5cff1961c3d81cce8"} Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.136221 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:30 crc kubenswrapper[4745]: 
E0121 10:39:30.136956 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:30.63694008 +0000 UTC m=+155.097727678 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.145634 4745 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.177190 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.219971 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lctvc"] Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.221019 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.239711 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:30 crc kubenswrapper[4745]: E0121 10:39:30.241731 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:30.741705379 +0000 UTC m=+155.202493137 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.254624 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lctvc"] Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.283777 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c6pc4"] Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.341581 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd5b4\" (UniqueName: \"kubernetes.io/projected/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-kube-api-access-zd5b4\") pod \"redhat-marketplace-lctvc\" (UID: 
\"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.341641 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-catalog-content\") pod \"redhat-marketplace-lctvc\" (UID: \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.341670 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-utilities\") pod \"redhat-marketplace-lctvc\" (UID: \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.341711 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:30 crc kubenswrapper[4745]: E0121 10:39:30.342090 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:30.842062862 +0000 UTC m=+155.302850460 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.397395 4745 patch_prober.go:28] interesting pod/apiserver-76f77b778f-n7p28 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 21 10:39:30 crc kubenswrapper[4745]: [+]log ok Jan 21 10:39:30 crc kubenswrapper[4745]: [+]etcd ok Jan 21 10:39:30 crc kubenswrapper[4745]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 21 10:39:30 crc kubenswrapper[4745]: [+]poststarthook/generic-apiserver-start-informers ok Jan 21 10:39:30 crc kubenswrapper[4745]: [+]poststarthook/max-in-flight-filter ok Jan 21 10:39:30 crc kubenswrapper[4745]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 21 10:39:30 crc kubenswrapper[4745]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 21 10:39:30 crc kubenswrapper[4745]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 21 10:39:30 crc kubenswrapper[4745]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 21 10:39:30 crc kubenswrapper[4745]: [+]poststarthook/project.openshift.io-projectcache ok Jan 21 10:39:30 crc kubenswrapper[4745]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 21 10:39:30 crc kubenswrapper[4745]: [+]poststarthook/openshift.io-startinformers ok Jan 21 10:39:30 crc kubenswrapper[4745]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 21 10:39:30 crc 
kubenswrapper[4745]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 21 10:39:30 crc kubenswrapper[4745]: livez check failed Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.397507 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-n7p28" podUID="c531fa6e-de28-476b-8b34-aca8b0e2cc56" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.410814 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lgfp9"] Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.453301 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.454072 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd5b4\" (UniqueName: \"kubernetes.io/projected/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-kube-api-access-zd5b4\") pod \"redhat-marketplace-lctvc\" (UID: \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.454108 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-catalog-content\") pod \"redhat-marketplace-lctvc\" (UID: \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.454137 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-utilities\") pod \"redhat-marketplace-lctvc\" (UID: \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.454709 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-utilities\") pod \"redhat-marketplace-lctvc\" (UID: \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:39:30 crc kubenswrapper[4745]: E0121 10:39:30.454820 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:30.954792921 +0000 UTC m=+155.415580519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.455373 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-catalog-content\") pod \"redhat-marketplace-lctvc\" (UID: \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.500566 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd5b4\" 
(UniqueName: \"kubernetes.io/projected/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-kube-api-access-zd5b4\") pod \"redhat-marketplace-lctvc\" (UID: \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.563322 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:30 crc kubenswrapper[4745]: E0121 10:39:30.563667 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:31.063655428 +0000 UTC m=+155.524443026 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.599418 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7989r"] Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.600494 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.607377 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.616173 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7989r"] Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.646298 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.657662 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.664570 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:30 crc kubenswrapper[4745]: E0121 10:39:30.665033 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:31.165010747 +0000 UTC m=+155.625798345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.675438 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.730378 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:30 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:30 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:30 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.731687 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.767463 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2mt2q\" (UniqueName: \"kubernetes.io/projected/e1c4364e-4898-4cd5-9ac7-9c800820e244-kube-api-access-2mt2q\") pod \"e1c4364e-4898-4cd5-9ac7-9c800820e244\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.767621 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1c4364e-4898-4cd5-9ac7-9c800820e244-config-volume\") pod \"e1c4364e-4898-4cd5-9ac7-9c800820e244\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.767710 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1c4364e-4898-4cd5-9ac7-9c800820e244-secret-volume\") pod \"e1c4364e-4898-4cd5-9ac7-9c800820e244\" (UID: \"e1c4364e-4898-4cd5-9ac7-9c800820e244\") " Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.767951 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-catalog-content\") pod \"redhat-operators-7989r\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.768018 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lvhd\" (UniqueName: \"kubernetes.io/projected/131ae967-4e30-4b48-a2c7-fdcfc1109db8-kube-api-access-7lvhd\") pod \"redhat-operators-7989r\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.768086 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.768112 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-utilities\") pod \"redhat-operators-7989r\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:39:30 crc kubenswrapper[4745]: E0121 10:39:30.768855 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-21 10:39:31.26882831 +0000 UTC m=+155.729615908 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4z5zq" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.774878 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1c4364e-4898-4cd5-9ac7-9c800820e244-config-volume" (OuterVolumeSpecName: "config-volume") pod "e1c4364e-4898-4cd5-9ac7-9c800820e244" (UID: "e1c4364e-4898-4cd5-9ac7-9c800820e244"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.795828 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1c4364e-4898-4cd5-9ac7-9c800820e244-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e1c4364e-4898-4cd5-9ac7-9c800820e244" (UID: "e1c4364e-4898-4cd5-9ac7-9c800820e244"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.807746 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1c4364e-4898-4cd5-9ac7-9c800820e244-kube-api-access-2mt2q" (OuterVolumeSpecName: "kube-api-access-2mt2q") pod "e1c4364e-4898-4cd5-9ac7-9c800820e244" (UID: "e1c4364e-4898-4cd5-9ac7-9c800820e244"). InnerVolumeSpecName "kube-api-access-2mt2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.811880 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-fn62p"] Jan 21 10:39:30 crc kubenswrapper[4745]: E0121 10:39:30.812221 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1c4364e-4898-4cd5-9ac7-9c800820e244" containerName="collect-profiles" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.812237 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1c4364e-4898-4cd5-9ac7-9c800820e244" containerName="collect-profiles" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.812381 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1c4364e-4898-4cd5-9ac7-9c800820e244" containerName="collect-profiles" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.813307 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.826583 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fn62p"] Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.870328 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.870801 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-catalog-content\") pod \"redhat-operators-7989r\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.870930 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lvhd\" (UniqueName: \"kubernetes.io/projected/131ae967-4e30-4b48-a2c7-fdcfc1109db8-kube-api-access-7lvhd\") pod \"redhat-operators-7989r\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.871056 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-utilities\") pod \"redhat-operators-7989r\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.871188 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2mt2q\" (UniqueName: 
\"kubernetes.io/projected/e1c4364e-4898-4cd5-9ac7-9c800820e244-kube-api-access-2mt2q\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.871258 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e1c4364e-4898-4cd5-9ac7-9c800820e244-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.871320 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e1c4364e-4898-4cd5-9ac7-9c800820e244-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.871790 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-utilities\") pod \"redhat-operators-7989r\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:39:30 crc kubenswrapper[4745]: E0121 10:39:30.871931 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-21 10:39:31.371914825 +0000 UTC m=+155.832702423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.872209 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-catalog-content\") pod \"redhat-operators-7989r\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.880658 4745 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-21T10:39:30.145659471Z","Handler":null,"Name":""} Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.903102 4745 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.903542 4745 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.909429 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lvhd\" (UniqueName: \"kubernetes.io/projected/131ae967-4e30-4b48-a2c7-fdcfc1109db8-kube-api-access-7lvhd\") pod \"redhat-operators-7989r\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " 
pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.954029 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.976165 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-utilities\") pod \"redhat-operators-fn62p\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.976249 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.976273 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-catalog-content\") pod \"redhat-operators-fn62p\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:39:30 crc kubenswrapper[4745]: I0121 10:39:30.976309 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrwxr\" (UniqueName: \"kubernetes.io/projected/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-kube-api-access-hrwxr\") pod \"redhat-operators-fn62p\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.002556 4745 
csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.002605 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.077440 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4z5zq\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") " pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.078684 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-utilities\") pod \"redhat-operators-fn62p\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.078789 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-catalog-content\") pod \"redhat-operators-fn62p\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:39:31 crc 
kubenswrapper[4745]: I0121 10:39:31.078838 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrwxr\" (UniqueName: \"kubernetes.io/projected/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-kube-api-access-hrwxr\") pod \"redhat-operators-fn62p\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.079310 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-catalog-content\") pod \"redhat-operators-fn62p\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.079572 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-utilities\") pod \"redhat-operators-fn62p\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.138519 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrwxr\" (UniqueName: \"kubernetes.io/projected/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-kube-api-access-hrwxr\") pod \"redhat-operators-fn62p\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.171679 4745 generic.go:334] "Generic (PLEG): container finished" podID="58df78fb-8f34-4442-8547-cacf761708dd" containerID="84656d9abcd36cf24043e5d1bf2fac3e6fab1173ef66c6db2a10af50eefc4491" exitCode=0 Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.171804 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lgfp9" 
event={"ID":"58df78fb-8f34-4442-8547-cacf761708dd","Type":"ContainerDied","Data":"84656d9abcd36cf24043e5d1bf2fac3e6fab1173ef66c6db2a10af50eefc4491"} Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.171851 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lgfp9" event={"ID":"58df78fb-8f34-4442-8547-cacf761708dd","Type":"ContainerStarted","Data":"f72e100195cff6a4cb9e98a5466c83bf5e8566c94130fe4bae9d0611f24e76c2"} Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.173185 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.178150 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.181854 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.183757 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" event={"ID":"e1c4364e-4898-4cd5-9ac7-9c800820e244","Type":"ContainerDied","Data":"0ddeb35afa2f5d2970f0b950c9435553ffc0b965f937db0d8a4d67655a83f094"} Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.183807 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ddeb35afa2f5d2970f0b950c9435553ffc0b965f937db0d8a4d67655a83f094" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.183806 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.216880 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.242234 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"210bd7df-f463-4bad-ac72-9118c21a6bbd","Type":"ContainerStarted","Data":"f42da67936cdd15e0f2f67638c6e946bf7f40a69b5d1d9759a8a44d7ce8881ce"} Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.274763 4745 generic.go:334] "Generic (PLEG): container finished" podID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" containerID="42d7a305954bf9870efb69feca1afd24ac45a65ec6e56e90ec0ad99cb436f6c5" exitCode=0 Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.275512 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52d7q" event={"ID":"be69561a-c25a-4e96-b75f-4f5664c5f2c4","Type":"ContainerDied","Data":"42d7a305954bf9870efb69feca1afd24ac45a65ec6e56e90ec0ad99cb436f6c5"} Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.288724 4745 generic.go:334] "Generic (PLEG): container finished" podID="bfaacdad-12f1-4904-96db-f24427117da4" containerID="8d20fa77a126cc4d51508bec0f87f4b17a96d5b784d2ce651ba9c8bed021b1b8" exitCode=0 Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.288872 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbg8m" 
event={"ID":"bfaacdad-12f1-4904-96db-f24427117da4","Type":"ContainerDied","Data":"8d20fa77a126cc4d51508bec0f87f4b17a96d5b784d2ce651ba9c8bed021b1b8"} Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.288916 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbg8m" event={"ID":"bfaacdad-12f1-4904-96db-f24427117da4","Type":"ContainerStarted","Data":"53614e5a61ee09a6babb8e0b2766ec5f1c666b8e3c1212d3737a79e919c86638"} Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.325093 4745 generic.go:334] "Generic (PLEG): container finished" podID="9d721ed0-4c33-4912-8973-e583db1e2075" containerID="0666094b5579eff7511dc08f95010134fb0ddca95334b060b2ad31cd44c368a4" exitCode=0 Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.326321 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6pc4" event={"ID":"9d721ed0-4c33-4912-8973-e583db1e2075","Type":"ContainerDied","Data":"0666094b5579eff7511dc08f95010134fb0ddca95334b060b2ad31cd44c368a4"} Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.326361 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6pc4" event={"ID":"9d721ed0-4c33-4912-8973-e583db1e2075","Type":"ContainerStarted","Data":"0dfb330a58249d562bbc6573d68d6a06acd60f851f06b6bdfa084551c8bd3183"} Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.364380 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.491945 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqts"] Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.610122 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lctvc"] Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.733850 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:31 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:31 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:31 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.733918 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.771657 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.772427 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.799847 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.800406 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.808155 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.913550 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6eca715f-6fd2-4285-a3cd-c555ec589bc0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:39:31 crc kubenswrapper[4745]: I0121 10:39:31.913629 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6eca715f-6fd2-4285-a3cd-c555ec589bc0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.014986 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6eca715f-6fd2-4285-a3cd-c555ec589bc0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.015390 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6eca715f-6fd2-4285-a3cd-c555ec589bc0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.015784 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"6eca715f-6fd2-4285-a3cd-c555ec589bc0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.043555 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.196945 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7989r"] Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.200160 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"6eca715f-6fd2-4285-a3cd-c555ec589bc0\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.397741 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7989r" event={"ID":"131ae967-4e30-4b48-a2c7-fdcfc1109db8","Type":"ContainerStarted","Data":"2a19956830b8dde330a56516496e14a1d6407c37bd600a5fa7df240c689e0c17"} Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.403076 4745 generic.go:334] "Generic (PLEG): container finished" podID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerID="49fbc28fa61864f6e8f108f29075c7b07e8310b35e54cfa8a43e9fe4cf9e5bc5" exitCode=0 Jan 21 10:39:32 crc 
kubenswrapper[4745]: I0121 10:39:32.404013 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqts" event={"ID":"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed","Type":"ContainerDied","Data":"49fbc28fa61864f6e8f108f29075c7b07e8310b35e54cfa8a43e9fe4cf9e5bc5"} Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.404043 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqts" event={"ID":"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed","Type":"ContainerStarted","Data":"1fd0a032fdeeca86471924714a7681e8913f24f212fdea214e80b509f4f931d1"} Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.411569 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"210bd7df-f463-4bad-ac72-9118c21a6bbd","Type":"ContainerStarted","Data":"f0e30ba833d125e1ff4937544915a513760f3d84f1a551224fa304fb171b68ea"} Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.413922 4745 generic.go:334] "Generic (PLEG): container finished" podID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" containerID="62376f3e2adfde3a28086793e63ec792d924786e8f0c5c649c0915e3672074da" exitCode=0 Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.413948 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lctvc" event={"ID":"fa834975-c760-4bcb-b0ee-e2f79ade8bd8","Type":"ContainerDied","Data":"62376f3e2adfde3a28086793e63ec792d924786e8f0c5c649c0915e3672074da"} Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.413965 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lctvc" event={"ID":"fa834975-c760-4bcb-b0ee-e2f79ade8bd8","Type":"ContainerStarted","Data":"89d4117d14a6e9f82cc110855239fde84850aabb4c691cf0e435cb94f84471b8"} Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.448478 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.788872 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:32 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:32 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:32 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.788925 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.945694 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-fn62p"] Jan 21 10:39:32 crc kubenswrapper[4745]: I0121 10:39:32.976665 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z5zq"] Jan 21 10:39:33 crc kubenswrapper[4745]: I0121 10:39:33.017383 4745 patch_prober.go:28] interesting pod/console-f9d7485db-j4phh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 10:39:33 crc kubenswrapper[4745]: I0121 10:39:33.017446 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j4phh" podUID="284744f3-7eb6-4977-87c8-5c311188f840" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 10:39:33 crc kubenswrapper[4745]: W0121 
10:39:33.057465 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9505ea9_d57f_4afa_add9_8e7e9eb84ece.slice/crio-cfbd0b9070595ba44db8c451050a71ad039fda195bb47a1f5c15ade0580cc54b WatchSource:0}: Error finding container cfbd0b9070595ba44db8c451050a71ad039fda195bb47a1f5c15ade0580cc54b: Status 404 returned error can't find the container with id cfbd0b9070595ba44db8c451050a71ad039fda195bb47a1f5c15ade0580cc54b Jan 21 10:39:33 crc kubenswrapper[4745]: I0121 10:39:33.514387 4745 generic.go:334] "Generic (PLEG): container finished" podID="210bd7df-f463-4bad-ac72-9118c21a6bbd" containerID="f0e30ba833d125e1ff4937544915a513760f3d84f1a551224fa304fb171b68ea" exitCode=0 Jan 21 10:39:33 crc kubenswrapper[4745]: I0121 10:39:33.514515 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"210bd7df-f463-4bad-ac72-9118c21a6bbd","Type":"ContainerDied","Data":"f0e30ba833d125e1ff4937544915a513760f3d84f1a551224fa304fb171b68ea"} Jan 21 10:39:33 crc kubenswrapper[4745]: I0121 10:39:33.515729 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-9rsxp" Jan 21 10:39:33 crc kubenswrapper[4745]: I0121 10:39:33.531155 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn62p" event={"ID":"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28","Type":"ContainerStarted","Data":"2bd2319c39d0f933d64027ab90bea8ada9c5595c010c7f192397f2e7c1c05f11"} Jan 21 10:39:33 crc kubenswrapper[4745]: I0121 10:39:33.550076 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" event={"ID":"f9505ea9-d57f-4afa-add9-8e7e9eb84ece","Type":"ContainerStarted","Data":"cfbd0b9070595ba44db8c451050a71ad039fda195bb47a1f5c15ade0580cc54b"} Jan 21 10:39:33 crc kubenswrapper[4745]: I0121 10:39:33.707866 4745 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=5.707839287 podStartE2EDuration="5.707839287s" podCreationTimestamp="2026-01-21 10:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:33.523863925 +0000 UTC m=+157.984651543" watchObservedRunningTime="2026-01-21 10:39:33.707839287 +0000 UTC m=+158.168626885" Jan 21 10:39:33 crc kubenswrapper[4745]: I0121 10:39:33.744804 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:33 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:33 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:33 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:33 crc kubenswrapper[4745]: I0121 10:39:33.747125 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:33 crc kubenswrapper[4745]: I0121 10:39:33.893979 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.373769 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.374436 4745 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.375047 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.375074 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.404748 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.448342 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-n7p28" Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.659598 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6eca715f-6fd2-4285-a3cd-c555ec589bc0","Type":"ContainerStarted","Data":"18a78cc14340f8afc965dc55fbd34abe1cafcfc0c3220dd36c687bb964ef64ea"} Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.663253 4745 generic.go:334] "Generic (PLEG): container finished" podID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" containerID="9d6642a8dfcd3b69281c538b151667b9f17f1809db62befb9e347a554708cfa5" exitCode=0 Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.663331 4745 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn62p" event={"ID":"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28","Type":"ContainerDied","Data":"9d6642a8dfcd3b69281c538b151667b9f17f1809db62befb9e347a554708cfa5"} Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.729041 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:34 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:34 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:34 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.729153 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.779681 4745 generic.go:334] "Generic (PLEG): container finished" podID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerID="a89d58e18f1e2458538c7f6c2bf76375e18a7925dc4e4b8e2faf0b66d5d5b5ee" exitCode=0 Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.779858 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7989r" event={"ID":"131ae967-4e30-4b48-a2c7-fdcfc1109db8","Type":"ContainerDied","Data":"a89d58e18f1e2458538c7f6c2bf76375e18a7925dc4e4b8e2faf0b66d5d5b5ee"} Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.817988 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-bchs9" Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.820515 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" event={"ID":"f9505ea9-d57f-4afa-add9-8e7e9eb84ece","Type":"ContainerStarted","Data":"74909c1499cbaa004be6a4c17fd4f24aed94532b43269cf62712935c9b072232"} Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.820565 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:34 crc kubenswrapper[4745]: I0121 10:39:34.903384 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" podStartSLOduration=135.903354534 podStartE2EDuration="2m15.903354534s" podCreationTimestamp="2026-01-21 10:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:34.90092563 +0000 UTC m=+159.361713238" watchObservedRunningTime="2026-01-21 10:39:34.903354534 +0000 UTC m=+159.364142142" Jan 21 10:39:35 crc kubenswrapper[4745]: I0121 10:39:35.406123 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:39:35 crc kubenswrapper[4745]: I0121 10:39:35.523992 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:39:35 crc kubenswrapper[4745]: I0121 10:39:35.735342 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:35 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:35 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:35 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:35 crc kubenswrapper[4745]: I0121 10:39:35.735425 4745 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:35 crc kubenswrapper[4745]: I0121 10:39:35.898975 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6eca715f-6fd2-4285-a3cd-c555ec589bc0","Type":"ContainerStarted","Data":"a05ee73bab48a99ae19a267613804e904b8f7cf0a35d30c30b92ae6162e58f0f"} Jan 21 10:39:35 crc kubenswrapper[4745]: I0121 10:39:35.959376 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=4.959352723 podStartE2EDuration="4.959352723s" podCreationTimestamp="2026-01-21 10:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:35.947787578 +0000 UTC m=+160.408575176" watchObservedRunningTime="2026-01-21 10:39:35.959352723 +0000 UTC m=+160.420140321" Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.515036 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.693934 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/210bd7df-f463-4bad-ac72-9118c21a6bbd-kube-api-access\") pod \"210bd7df-f463-4bad-ac72-9118c21a6bbd\" (UID: \"210bd7df-f463-4bad-ac72-9118c21a6bbd\") " Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.694080 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/210bd7df-f463-4bad-ac72-9118c21a6bbd-kubelet-dir\") pod \"210bd7df-f463-4bad-ac72-9118c21a6bbd\" (UID: \"210bd7df-f463-4bad-ac72-9118c21a6bbd\") " Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.694485 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/210bd7df-f463-4bad-ac72-9118c21a6bbd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "210bd7df-f463-4bad-ac72-9118c21a6bbd" (UID: "210bd7df-f463-4bad-ac72-9118c21a6bbd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.708634 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210bd7df-f463-4bad-ac72-9118c21a6bbd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "210bd7df-f463-4bad-ac72-9118c21a6bbd" (UID: "210bd7df-f463-4bad-ac72-9118c21a6bbd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.726180 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:36 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:36 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:36 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.726259 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.796825 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/210bd7df-f463-4bad-ac72-9118c21a6bbd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.796868 4745 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/210bd7df-f463-4bad-ac72-9118c21a6bbd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.915665 4745 generic.go:334] "Generic (PLEG): container finished" podID="6eca715f-6fd2-4285-a3cd-c555ec589bc0" containerID="a05ee73bab48a99ae19a267613804e904b8f7cf0a35d30c30b92ae6162e58f0f" exitCode=0 Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.915770 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6eca715f-6fd2-4285-a3cd-c555ec589bc0","Type":"ContainerDied","Data":"a05ee73bab48a99ae19a267613804e904b8f7cf0a35d30c30b92ae6162e58f0f"} Jan 
21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.922936 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"210bd7df-f463-4bad-ac72-9118c21a6bbd","Type":"ContainerDied","Data":"f42da67936cdd15e0f2f67638c6e946bf7f40a69b5d1d9759a8a44d7ce8881ce"} Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.922989 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f42da67936cdd15e0f2f67638c6e946bf7f40a69b5d1d9759a8a44d7ce8881ce" Jan 21 10:39:36 crc kubenswrapper[4745]: I0121 10:39:36.923122 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 21 10:39:37 crc kubenswrapper[4745]: I0121 10:39:37.726053 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:37 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:37 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:37 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:37 crc kubenswrapper[4745]: I0121 10:39:37.726704 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:38 crc kubenswrapper[4745]: I0121 10:39:38.724517 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:38 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:38 crc 
kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:38 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:38 crc kubenswrapper[4745]: I0121 10:39:38.724665 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:39 crc kubenswrapper[4745]: I0121 10:39:39.339372 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:39:39 crc kubenswrapper[4745]: I0121 10:39:39.431916 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kubelet-dir\") pod \"6eca715f-6fd2-4285-a3cd-c555ec589bc0\" (UID: \"6eca715f-6fd2-4285-a3cd-c555ec589bc0\") " Jan 21 10:39:39 crc kubenswrapper[4745]: I0121 10:39:39.432221 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kube-api-access\") pod \"6eca715f-6fd2-4285-a3cd-c555ec589bc0\" (UID: \"6eca715f-6fd2-4285-a3cd-c555ec589bc0\") " Jan 21 10:39:39 crc kubenswrapper[4745]: I0121 10:39:39.433755 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6eca715f-6fd2-4285-a3cd-c555ec589bc0" (UID: "6eca715f-6fd2-4285-a3cd-c555ec589bc0"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:39:39 crc kubenswrapper[4745]: I0121 10:39:39.458210 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6eca715f-6fd2-4285-a3cd-c555ec589bc0" (UID: "6eca715f-6fd2-4285-a3cd-c555ec589bc0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:39:39 crc kubenswrapper[4745]: I0121 10:39:39.533980 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:39 crc kubenswrapper[4745]: I0121 10:39:39.534012 4745 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6eca715f-6fd2-4285-a3cd-c555ec589bc0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:39:39 crc kubenswrapper[4745]: I0121 10:39:39.726211 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:39 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:39 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:39 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:39 crc kubenswrapper[4745]: I0121 10:39:39.726341 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:40 crc kubenswrapper[4745]: I0121 10:39:40.199658 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"6eca715f-6fd2-4285-a3cd-c555ec589bc0","Type":"ContainerDied","Data":"18a78cc14340f8afc965dc55fbd34abe1cafcfc0c3220dd36c687bb964ef64ea"} Jan 21 10:39:40 crc kubenswrapper[4745]: I0121 10:39:40.199729 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18a78cc14340f8afc965dc55fbd34abe1cafcfc0c3220dd36c687bb964ef64ea" Jan 21 10:39:40 crc kubenswrapper[4745]: I0121 10:39:40.199811 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 21 10:39:40 crc kubenswrapper[4745]: I0121 10:39:40.728166 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:40 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:40 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:40 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:40 crc kubenswrapper[4745]: I0121 10:39:40.728287 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:41 crc kubenswrapper[4745]: I0121 10:39:41.724642 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:41 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:41 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:41 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:41 crc 
kubenswrapper[4745]: I0121 10:39:41.724970 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:42 crc kubenswrapper[4745]: I0121 10:39:42.360304 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:39:42 crc kubenswrapper[4745]: I0121 10:39:42.397691 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df21a803-8072-4f8f-8f3a-00267f9c3419-metrics-certs\") pod \"network-metrics-daemon-px52r\" (UID: \"df21a803-8072-4f8f-8f3a-00267f9c3419\") " pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:39:42 crc kubenswrapper[4745]: I0121 10:39:42.428720 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-px52r" Jan 21 10:39:42 crc kubenswrapper[4745]: I0121 10:39:42.724445 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:42 crc kubenswrapper[4745]: [-]has-synced failed: reason withheld Jan 21 10:39:42 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:42 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:42 crc kubenswrapper[4745]: I0121 10:39:42.724541 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:42 crc kubenswrapper[4745]: I0121 10:39:42.939222 4745 patch_prober.go:28] interesting pod/console-f9d7485db-j4phh container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 10:39:42 crc kubenswrapper[4745]: I0121 10:39:42.939305 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-j4phh" podUID="284744f3-7eb6-4977-87c8-5c311188f840" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 10:39:43 crc kubenswrapper[4745]: I0121 10:39:43.320063 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-px52r"] Jan 21 10:39:43 crc kubenswrapper[4745]: W0121 10:39:43.357932 4745 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf21a803_8072_4f8f_8f3a_00267f9c3419.slice/crio-682bc99cb280743a75651f1155db1da8b9640bd0e3ae976bd812749c8345f2cf WatchSource:0}: Error finding container 682bc99cb280743a75651f1155db1da8b9640bd0e3ae976bd812749c8345f2cf: Status 404 returned error can't find the container with id 682bc99cb280743a75651f1155db1da8b9640bd0e3ae976bd812749c8345f2cf Jan 21 10:39:43 crc kubenswrapper[4745]: I0121 10:39:43.724581 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 21 10:39:43 crc kubenswrapper[4745]: [+]has-synced ok Jan 21 10:39:43 crc kubenswrapper[4745]: [+]process-running ok Jan 21 10:39:43 crc kubenswrapper[4745]: healthz check failed Jan 21 10:39:43 crc kubenswrapper[4745]: I0121 10:39:43.724656 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.335275 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.335993 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.337455 4745 
patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.337523 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.337593 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-gwvtn" Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.338390 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.338440 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.339806 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"ae7b6b58cb07db3a844fca358214187ef4d244c7505afacb0ddb2b08e02bd901"} pod="openshift-console/downloads-7954f5f757-gwvtn" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.339903 4745 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" containerID="cri-o://ae7b6b58cb07db3a844fca358214187ef4d244c7505afacb0ddb2b08e02bd901" gracePeriod=2 Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.358704 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-px52r" event={"ID":"df21a803-8072-4f8f-8f3a-00267f9c3419","Type":"ContainerStarted","Data":"682bc99cb280743a75651f1155db1da8b9640bd0e3ae976bd812749c8345f2cf"} Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.729834 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:44 crc kubenswrapper[4745]: I0121 10:39:44.734441 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-tk5j9" Jan 21 10:39:45 crc kubenswrapper[4745]: I0121 10:39:45.374121 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-px52r" event={"ID":"df21a803-8072-4f8f-8f3a-00267f9c3419","Type":"ContainerStarted","Data":"a9502a286a1ecf0d30e29fc6e1383ee6dd44f2688ae5ffa7e863bb7594ea196d"} Jan 21 10:39:45 crc kubenswrapper[4745]: I0121 10:39:45.384498 4745 generic.go:334] "Generic (PLEG): container finished" podID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerID="ae7b6b58cb07db3a844fca358214187ef4d244c7505afacb0ddb2b08e02bd901" exitCode=0 Jan 21 10:39:45 crc kubenswrapper[4745]: I0121 10:39:45.384935 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gwvtn" event={"ID":"fe3c7d57-12a7-426c-8c02-fe7f24949bae","Type":"ContainerDied","Data":"ae7b6b58cb07db3a844fca358214187ef4d244c7505afacb0ddb2b08e02bd901"} Jan 21 10:39:45 crc kubenswrapper[4745]: I0121 10:39:45.866963 4745 patch_prober.go:28] 
interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:39:45 crc kubenswrapper[4745]: I0121 10:39:45.867487 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:39:47 crc kubenswrapper[4745]: I0121 10:39:47.421746 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-gwvtn" event={"ID":"fe3c7d57-12a7-426c-8c02-fe7f24949bae","Type":"ContainerStarted","Data":"abb152413c86c3560e63ba8235f31cf58e6249773e4018f385959b3577d79a01"} Jan 21 10:39:47 crc kubenswrapper[4745]: I0121 10:39:47.422969 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-gwvtn" Jan 21 10:39:47 crc kubenswrapper[4745]: I0121 10:39:47.423322 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 21 10:39:47 crc kubenswrapper[4745]: I0121 10:39:47.423386 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:47 crc kubenswrapper[4745]: I0121 10:39:47.434272 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/network-metrics-daemon-px52r" event={"ID":"df21a803-8072-4f8f-8f3a-00267f9c3419","Type":"ContainerStarted","Data":"799c3e46dd90d008e629c8e9da655d754ce5892e92132a0a8febddbdcd616739"} Jan 21 10:39:48 crc kubenswrapper[4745]: I0121 10:39:48.453670 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 21 10:39:48 crc kubenswrapper[4745]: I0121 10:39:48.454312 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:48 crc kubenswrapper[4745]: I0121 10:39:48.482564 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-px52r" podStartSLOduration=150.482523364 podStartE2EDuration="2m30.482523364s" podCreationTimestamp="2026-01-21 10:37:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:39:48.474842362 +0000 UTC m=+172.935629950" watchObservedRunningTime="2026-01-21 10:39:48.482523364 +0000 UTC m=+172.943310952" Jan 21 10:39:51 crc kubenswrapper[4745]: I0121 10:39:51.375197 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" Jan 21 10:39:52 crc kubenswrapper[4745]: I0121 10:39:52.937820 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:52 crc kubenswrapper[4745]: I0121 10:39:52.942961 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:39:54 crc kubenswrapper[4745]: I0121 10:39:54.339454 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 21 10:39:54 crc kubenswrapper[4745]: I0121 10:39:54.339561 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Jan 21 10:39:54 crc kubenswrapper[4745]: I0121 10:39:54.347691 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:54 crc kubenswrapper[4745]: I0121 10:39:54.347627 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Jan 21 10:39:55 crc kubenswrapper[4745]: I0121 10:39:55.479202 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" Jan 21 10:39:55 crc kubenswrapper[4745]: I0121 10:39:55.690060 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x6jmv"] Jan 21 10:39:55 crc kubenswrapper[4745]: I0121 10:39:55.690281 4745 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" podUID="03658e3a-6a55-4326-9ab1-9ff0583f55ed" containerName="controller-manager" containerID="cri-o://841613b74e80c8e2a1ee24f5fe43aa3c38eacca2977ac18660bcd58ba1de19cb" gracePeriod=30 Jan 21 10:39:55 crc kubenswrapper[4745]: I0121 10:39:55.803903 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr"] Jan 21 10:39:55 crc kubenswrapper[4745]: I0121 10:39:55.804506 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" podUID="a25f7cf6-d63e-48f4-a43a-623ee2cf7908" containerName="route-controller-manager" containerID="cri-o://91779cd83f9cc81c41e34014cf49576a02007a9fb25c7c5e6faa2b9c152137a1" gracePeriod=30 Jan 21 10:39:56 crc kubenswrapper[4745]: I0121 10:39:56.593991 4745 generic.go:334] "Generic (PLEG): container finished" podID="03658e3a-6a55-4326-9ab1-9ff0583f55ed" containerID="841613b74e80c8e2a1ee24f5fe43aa3c38eacca2977ac18660bcd58ba1de19cb" exitCode=0 Jan 21 10:39:56 crc kubenswrapper[4745]: I0121 10:39:56.594084 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" event={"ID":"03658e3a-6a55-4326-9ab1-9ff0583f55ed","Type":"ContainerDied","Data":"841613b74e80c8e2a1ee24f5fe43aa3c38eacca2977ac18660bcd58ba1de19cb"} Jan 21 10:39:56 crc kubenswrapper[4745]: I0121 10:39:56.597509 4745 generic.go:334] "Generic (PLEG): container finished" podID="a25f7cf6-d63e-48f4-a43a-623ee2cf7908" containerID="91779cd83f9cc81c41e34014cf49576a02007a9fb25c7c5e6faa2b9c152137a1" exitCode=0 Jan 21 10:39:56 crc kubenswrapper[4745]: I0121 10:39:56.597578 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" 
event={"ID":"a25f7cf6-d63e-48f4-a43a-623ee2cf7908","Type":"ContainerDied","Data":"91779cd83f9cc81c41e34014cf49576a02007a9fb25c7c5e6faa2b9c152137a1"} Jan 21 10:40:02 crc kubenswrapper[4745]: I0121 10:40:02.479768 4745 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-pbfgr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 21 10:40:02 crc kubenswrapper[4745]: I0121 10:40:02.480347 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" podUID="a25f7cf6-d63e-48f4-a43a-623ee2cf7908" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 21 10:40:02 crc kubenswrapper[4745]: I0121 10:40:02.702779 4745 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-x6jmv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 21 10:40:02 crc kubenswrapper[4745]: I0121 10:40:02.702851 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" podUID="03658e3a-6a55-4326-9ab1-9ff0583f55ed" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 21 10:40:04 crc kubenswrapper[4745]: I0121 10:40:04.340711 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-gwvtn" Jan 21 10:40:04 crc kubenswrapper[4745]: I0121 10:40:04.757640 4745 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.493701 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:40:06 crc kubenswrapper[4745]: E0121 10:40:06.494102 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6eca715f-6fd2-4285-a3cd-c555ec589bc0" containerName="pruner" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.494120 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6eca715f-6fd2-4285-a3cd-c555ec589bc0" containerName="pruner" Jan 21 10:40:06 crc kubenswrapper[4745]: E0121 10:40:06.494140 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210bd7df-f463-4bad-ac72-9118c21a6bbd" containerName="pruner" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.494148 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="210bd7df-f463-4bad-ac72-9118c21a6bbd" containerName="pruner" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.494278 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="210bd7df-f463-4bad-ac72-9118c21a6bbd" containerName="pruner" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.494297 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="6eca715f-6fd2-4285-a3cd-c555ec589bc0" containerName="pruner" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.494951 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.538361 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.538688 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.545166 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.662335 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26a578ee-30c1-4393-aab3-eb32fdc0a700-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"26a578ee-30c1-4393-aab3-eb32fdc0a700\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.662380 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26a578ee-30c1-4393-aab3-eb32fdc0a700-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"26a578ee-30c1-4393-aab3-eb32fdc0a700\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.763621 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26a578ee-30c1-4393-aab3-eb32fdc0a700-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"26a578ee-30c1-4393-aab3-eb32fdc0a700\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.763672 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/26a578ee-30c1-4393-aab3-eb32fdc0a700-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"26a578ee-30c1-4393-aab3-eb32fdc0a700\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.763771 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26a578ee-30c1-4393-aab3-eb32fdc0a700-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"26a578ee-30c1-4393-aab3-eb32fdc0a700\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.789065 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26a578ee-30c1-4393-aab3-eb32fdc0a700-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"26a578ee-30c1-4393-aab3-eb32fdc0a700\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:40:06 crc kubenswrapper[4745]: I0121 10:40:06.853009 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.696952 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.700654 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.712787 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.766779 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-var-lock\") pod \"installer-9-crc\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.766889 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04bbf215-722d-4e3d-bc35-99fd1f673a02-kube-api-access\") pod \"installer-9-crc\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.767138 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-kubelet-dir\") pod \"installer-9-crc\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.868377 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-kubelet-dir\") pod \"installer-9-crc\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.868446 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-var-lock\") pod \"installer-9-crc\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.868486 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04bbf215-722d-4e3d-bc35-99fd1f673a02-kube-api-access\") pod \"installer-9-crc\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.868486 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-kubelet-dir\") pod \"installer-9-crc\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.868715 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-var-lock\") pod \"installer-9-crc\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:40:10 crc kubenswrapper[4745]: I0121 10:40:10.900856 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04bbf215-722d-4e3d-bc35-99fd1f673a02-kube-api-access\") pod \"installer-9-crc\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:40:11 crc kubenswrapper[4745]: I0121 10:40:11.028100 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:40:13 crc kubenswrapper[4745]: I0121 10:40:13.480017 4745 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-pbfgr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 10:40:13 crc kubenswrapper[4745]: I0121 10:40:13.480094 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" podUID="a25f7cf6-d63e-48f4-a43a-623ee2cf7908" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 10:40:13 crc kubenswrapper[4745]: I0121 10:40:13.703085 4745 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-x6jmv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 10:40:13 crc kubenswrapper[4745]: I0121 10:40:13.703272 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" podUID="03658e3a-6a55-4326-9ab1-9ff0583f55ed" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 10:40:15 crc kubenswrapper[4745]: I0121 10:40:15.868560 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:40:15 crc kubenswrapper[4745]: I0121 10:40:15.869247 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:40:18 crc kubenswrapper[4745]: E0121 10:40:18.215484 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 10:40:18 crc kubenswrapper[4745]: E0121 10:40:18.215683 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r8vp4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-sbg8m_openshift-marketplace(bfaacdad-12f1-4904-96db-f24427117da4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:40:18 crc kubenswrapper[4745]: E0121 10:40:18.216872 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-sbg8m" podUID="bfaacdad-12f1-4904-96db-f24427117da4" Jan 21 10:40:18 crc 
kubenswrapper[4745]: E0121 10:40:18.380567 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 21 10:40:18 crc kubenswrapper[4745]: E0121 10:40:18.380777 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-249dm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-52d7q_openshift-marketplace(be69561a-c25a-4e96-b75f-4f5664c5f2c4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:40:18 crc kubenswrapper[4745]: E0121 10:40:18.381983 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-52d7q" podUID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" Jan 21 10:40:22 crc kubenswrapper[4745]: E0121 10:40:22.881607 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-52d7q" podUID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" Jan 21 10:40:22 crc kubenswrapper[4745]: E0121 10:40:22.881767 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-sbg8m" podUID="bfaacdad-12f1-4904-96db-f24427117da4" Jan 21 10:40:22 crc kubenswrapper[4745]: E0121 10:40:22.973206 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 10:40:22 crc kubenswrapper[4745]: E0121 10:40:22.973363 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lvhd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-7989r_openshift-marketplace(131ae967-4e30-4b48-a2c7-fdcfc1109db8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:40:22 crc kubenswrapper[4745]: E0121 10:40:22.974729 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-7989r" podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" Jan 21 10:40:23 crc kubenswrapper[4745]: I0121 10:40:23.479413 4745 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-pbfgr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 10:40:23 crc kubenswrapper[4745]: I0121 10:40:23.479515 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" podUID="a25f7cf6-d63e-48f4-a43a-623ee2cf7908" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 10:40:23 crc kubenswrapper[4745]: I0121 10:40:23.702851 4745 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-x6jmv container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 10:40:23 crc kubenswrapper[4745]: I0121 10:40:23.702959 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" podUID="03658e3a-6a55-4326-9ab1-9ff0583f55ed" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 10:40:24 crc kubenswrapper[4745]: E0121 
10:40:24.320494 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-7989r" podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" Jan 21 10:40:24 crc kubenswrapper[4745]: E0121 10:40:24.387951 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 10:40:24 crc kubenswrapper[4745]: E0121 10:40:24.388106 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fj44x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,Run
AsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-c6pc4_openshift-marketplace(9d721ed0-4c33-4912-8973-e583db1e2075): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:40:24 crc kubenswrapper[4745]: E0121 10:40:24.389293 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-c6pc4" podUID="9d721ed0-4c33-4912-8973-e583db1e2075" Jan 21 10:40:25 crc kubenswrapper[4745]: E0121 10:40:25.961948 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-c6pc4" podUID="9d721ed0-4c33-4912-8973-e583db1e2075" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.060311 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.080004 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.080705 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.080871 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hk28m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,
ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-rgqts_openshift-marketplace(d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.082213 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-rgqts" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.107962 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc"] Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.108174 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03658e3a-6a55-4326-9ab1-9ff0583f55ed" containerName="controller-manager" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.108186 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="03658e3a-6a55-4326-9ab1-9ff0583f55ed" containerName="controller-manager" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.108206 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25f7cf6-d63e-48f4-a43a-623ee2cf7908" containerName="route-controller-manager" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.108213 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25f7cf6-d63e-48f4-a43a-623ee2cf7908" containerName="route-controller-manager" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.108299 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="03658e3a-6a55-4326-9ab1-9ff0583f55ed" containerName="controller-manager" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 
10:40:26.108311 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25f7cf6-d63e-48f4-a43a-623ee2cf7908" containerName="route-controller-manager" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.108679 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.133556 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.133666 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hrwxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-fn62p_openshift-marketplace(84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.134716 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-fn62p" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" Jan 21 10:40:26 crc 
kubenswrapper[4745]: I0121 10:40:26.141440 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc"] Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.193737 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.194244 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mwfpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Terminat
ionMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-lgfp9_openshift-marketplace(58df78fb-8f34-4442-8547-cacf761708dd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.195817 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-lgfp9" podUID="58df78fb-8f34-4442-8547-cacf761708dd" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.201523 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.204714 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zd5b4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lctvc_openshift-marketplace(fa834975-c760-4bcb-b0ee-e2f79ade8bd8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.205864 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lctvc" podUID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" Jan 21 10:40:26 crc 
kubenswrapper[4745]: I0121 10:40:26.235887 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-client-ca\") pod \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.235976 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-serving-cert\") pod \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.236002 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-config\") pod \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.236035 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knxkf\" (UniqueName: \"kubernetes.io/projected/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-kube-api-access-knxkf\") pod \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\" (UID: \"a25f7cf6-d63e-48f4-a43a-623ee2cf7908\") " Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.236063 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shv7w\" (UniqueName: \"kubernetes.io/projected/03658e3a-6a55-4326-9ab1-9ff0583f55ed-kube-api-access-shv7w\") pod \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.236096 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-proxy-ca-bundles\") pod \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.236128 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-client-ca\") pod \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.236192 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-config\") pod \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.236247 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03658e3a-6a55-4326-9ab1-9ff0583f55ed-serving-cert\") pod \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\" (UID: \"03658e3a-6a55-4326-9ab1-9ff0583f55ed\") " Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.236484 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7f663f7-35ba-4c54-a326-27891aeb51e4-serving-cert\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.236521 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-config\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: 
\"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.236586 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnwn\" (UniqueName: \"kubernetes.io/projected/b7f663f7-35ba-4c54-a326-27891aeb51e4-kube-api-access-wtnwn\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.236614 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-client-ca\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.237774 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-client-ca" (OuterVolumeSpecName: "client-ca") pod "03658e3a-6a55-4326-9ab1-9ff0583f55ed" (UID: "03658e3a-6a55-4326-9ab1-9ff0583f55ed"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.238716 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-client-ca" (OuterVolumeSpecName: "client-ca") pod "a25f7cf6-d63e-48f4-a43a-623ee2cf7908" (UID: "a25f7cf6-d63e-48f4-a43a-623ee2cf7908"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.239493 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-config" (OuterVolumeSpecName: "config") pod "03658e3a-6a55-4326-9ab1-9ff0583f55ed" (UID: "03658e3a-6a55-4326-9ab1-9ff0583f55ed"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.254583 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-config" (OuterVolumeSpecName: "config") pod "a25f7cf6-d63e-48f4-a43a-623ee2cf7908" (UID: "a25f7cf6-d63e-48f4-a43a-623ee2cf7908"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.271293 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03658e3a-6a55-4326-9ab1-9ff0583f55ed-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "03658e3a-6a55-4326-9ab1-9ff0583f55ed" (UID: "03658e3a-6a55-4326-9ab1-9ff0583f55ed"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.277820 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03658e3a-6a55-4326-9ab1-9ff0583f55ed-kube-api-access-shv7w" (OuterVolumeSpecName: "kube-api-access-shv7w") pod "03658e3a-6a55-4326-9ab1-9ff0583f55ed" (UID: "03658e3a-6a55-4326-9ab1-9ff0583f55ed"). InnerVolumeSpecName "kube-api-access-shv7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.282164 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a25f7cf6-d63e-48f4-a43a-623ee2cf7908" (UID: "a25f7cf6-d63e-48f4-a43a-623ee2cf7908"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.284029 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "03658e3a-6a55-4326-9ab1-9ff0583f55ed" (UID: "03658e3a-6a55-4326-9ab1-9ff0583f55ed"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.284616 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-kube-api-access-knxkf" (OuterVolumeSpecName: "kube-api-access-knxkf") pod "a25f7cf6-d63e-48f4-a43a-623ee2cf7908" (UID: "a25f7cf6-d63e-48f4-a43a-623ee2cf7908"). InnerVolumeSpecName "kube-api-access-knxkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337233 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnwn\" (UniqueName: \"kubernetes.io/projected/b7f663f7-35ba-4c54-a326-27891aeb51e4-kube-api-access-wtnwn\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337305 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-client-ca\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337351 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7f663f7-35ba-4c54-a326-27891aeb51e4-serving-cert\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337381 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-config\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337440 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337450 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337459 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337467 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knxkf\" (UniqueName: \"kubernetes.io/projected/a25f7cf6-d63e-48f4-a43a-623ee2cf7908-kube-api-access-knxkf\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337477 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shv7w\" (UniqueName: \"kubernetes.io/projected/03658e3a-6a55-4326-9ab1-9ff0583f55ed-kube-api-access-shv7w\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337485 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337495 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.337502 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03658e3a-6a55-4326-9ab1-9ff0583f55ed-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:26 crc 
kubenswrapper[4745]: I0121 10:40:26.337510 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03658e3a-6a55-4326-9ab1-9ff0583f55ed-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.338609 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-config\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.339456 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-client-ca\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.345688 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7f663f7-35ba-4c54-a326-27891aeb51e4-serving-cert\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.361769 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnwn\" (UniqueName: \"kubernetes.io/projected/b7f663f7-35ba-4c54-a326-27891aeb51e4-kube-api-access-wtnwn\") pod \"route-controller-manager-7df6c4f584-jxgrc\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 
10:40:26.438503 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.499701 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.571150 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.726446 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc"] Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.842112 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"04bbf215-722d-4e3d-bc35-99fd1f673a02","Type":"ContainerStarted","Data":"9ab2ea856d67f260892a3459129a338f7ba50193ff33e4e52884553410cd300a"} Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.844518 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" event={"ID":"b7f663f7-35ba-4c54-a326-27891aeb51e4","Type":"ContainerStarted","Data":"017d49dedf9269d11899d68d379f588489a04f44ffcf8edccbb87cecf3e6ad4f"} Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.849175 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.849213 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr" event={"ID":"a25f7cf6-d63e-48f4-a43a-623ee2cf7908","Type":"ContainerDied","Data":"9a39867b83fd30970030b47957a50a4c4d63c554968d60b4792796a1473b12fe"} Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.849991 4745 scope.go:117] "RemoveContainer" containerID="91779cd83f9cc81c41e34014cf49576a02007a9fb25c7c5e6faa2b9c152137a1" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.852844 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"26a578ee-30c1-4393-aab3-eb32fdc0a700","Type":"ContainerStarted","Data":"26274a095b327fb404d44ccaa0495386171cfca9cd7821e4f6a2c2b74c7ac923"} Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.863288 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.867001 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lctvc" podUID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.867075 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-x6jmv" event={"ID":"03658e3a-6a55-4326-9ab1-9ff0583f55ed","Type":"ContainerDied","Data":"57020ecef1aac18819510da788f44b895d41e4d921390bb9b51f6397ba43d904"} Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.867319 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-rgqts" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.875572 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-lgfp9" podUID="58df78fb-8f34-4442-8547-cacf761708dd" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.875588 4745 scope.go:117] "RemoveContainer" containerID="841613b74e80c8e2a1ee24f5fe43aa3c38eacca2977ac18660bcd58ba1de19cb" Jan 21 10:40:26 crc kubenswrapper[4745]: E0121 10:40:26.875832 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off 
pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-fn62p" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" Jan 21 10:40:26 crc kubenswrapper[4745]: I0121 10:40:26.995710 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr"] Jan 21 10:40:27 crc kubenswrapper[4745]: I0121 10:40:27.003479 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-pbfgr"] Jan 21 10:40:27 crc kubenswrapper[4745]: I0121 10:40:27.010838 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x6jmv"] Jan 21 10:40:27 crc kubenswrapper[4745]: I0121 10:40:27.013688 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-x6jmv"] Jan 21 10:40:27 crc kubenswrapper[4745]: I0121 10:40:27.886442 4745 generic.go:334] "Generic (PLEG): container finished" podID="26a578ee-30c1-4393-aab3-eb32fdc0a700" containerID="30a36a176fcaba3934c9b6df009c8c4ef2971185ef10b6407496a2b403213bcb" exitCode=0 Jan 21 10:40:27 crc kubenswrapper[4745]: I0121 10:40:27.886760 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"26a578ee-30c1-4393-aab3-eb32fdc0a700","Type":"ContainerDied","Data":"30a36a176fcaba3934c9b6df009c8c4ef2971185ef10b6407496a2b403213bcb"} Jan 21 10:40:27 crc kubenswrapper[4745]: I0121 10:40:27.891612 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"04bbf215-722d-4e3d-bc35-99fd1f673a02","Type":"ContainerStarted","Data":"554d4e7d11b67e7c5320d439bba4063afc80aa6782fd46c36a5e506f8332dbf0"} Jan 21 10:40:27 crc kubenswrapper[4745]: I0121 10:40:27.894198 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" event={"ID":"b7f663f7-35ba-4c54-a326-27891aeb51e4","Type":"ContainerStarted","Data":"3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61"} Jan 21 10:40:27 crc kubenswrapper[4745]: I0121 10:40:27.894473 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:27 crc kubenswrapper[4745]: I0121 10:40:27.900186 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:27 crc kubenswrapper[4745]: I0121 10:40:27.938729 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=17.938708905 podStartE2EDuration="17.938708905s" podCreationTimestamp="2026-01-21 10:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:40:27.935772134 +0000 UTC m=+212.396559742" watchObservedRunningTime="2026-01-21 10:40:27.938708905 +0000 UTC m=+212.399496503" Jan 21 10:40:27 crc kubenswrapper[4745]: I0121 10:40:27.962376 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" podStartSLOduration=12.962353806 podStartE2EDuration="12.962353806s" podCreationTimestamp="2026-01-21 10:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:40:27.959005251 +0000 UTC m=+212.419792869" watchObservedRunningTime="2026-01-21 10:40:27.962353806 +0000 UTC m=+212.423141404" Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.013679 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="03658e3a-6a55-4326-9ab1-9ff0583f55ed" path="/var/lib/kubelet/pods/03658e3a-6a55-4326-9ab1-9ff0583f55ed/volumes" Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.014228 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25f7cf6-d63e-48f4-a43a-623ee2cf7908" path="/var/lib/kubelet/pods/a25f7cf6-d63e-48f4-a43a-623ee2cf7908/volumes" Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.059249 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ck2f"] Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.159686 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-884444db4-5s4xv"] Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.160667 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv" Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.165169 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.165333 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.178322 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.178888 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.179146 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.179378 4745 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.179691 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.193343 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-884444db4-5s4xv"]
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.281352 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-proxy-ca-bundles\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.281630 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6flsx\" (UniqueName: \"kubernetes.io/projected/5b85ba9f-f076-4546-86d7-1fa02a52e80c-kube-api-access-6flsx\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.281759 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b85ba9f-f076-4546-86d7-1fa02a52e80c-serving-cert\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.281896 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-client-ca\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.281980 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-config\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.384293 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-client-ca\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.384367 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-config\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.384437 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-proxy-ca-bundles\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.384493 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6flsx\" (UniqueName: \"kubernetes.io/projected/5b85ba9f-f076-4546-86d7-1fa02a52e80c-kube-api-access-6flsx\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.384581 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b85ba9f-f076-4546-86d7-1fa02a52e80c-serving-cert\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.386273 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-proxy-ca-bundles\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.386315 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-config\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.387134 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-client-ca\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.397682 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b85ba9f-f076-4546-86d7-1fa02a52e80c-serving-cert\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.404769 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6flsx\" (UniqueName: \"kubernetes.io/projected/5b85ba9f-f076-4546-86d7-1fa02a52e80c-kube-api-access-6flsx\") pod \"controller-manager-884444db4-5s4xv\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.476179 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.683466 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-884444db4-5s4xv"]
Jan 21 10:40:28 crc kubenswrapper[4745]: I0121 10:40:28.905352 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv" event={"ID":"5b85ba9f-f076-4546-86d7-1fa02a52e80c","Type":"ContainerStarted","Data":"9a062694a3e0252553ea9e60c5f32be597edb55e283b02495d7913192f76a3ca"}
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.153793 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.307916 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26a578ee-30c1-4393-aab3-eb32fdc0a700-kubelet-dir\") pod \"26a578ee-30c1-4393-aab3-eb32fdc0a700\" (UID: \"26a578ee-30c1-4393-aab3-eb32fdc0a700\") "
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.308029 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26a578ee-30c1-4393-aab3-eb32fdc0a700-kube-api-access\") pod \"26a578ee-30c1-4393-aab3-eb32fdc0a700\" (UID: \"26a578ee-30c1-4393-aab3-eb32fdc0a700\") "
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.308067 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26a578ee-30c1-4393-aab3-eb32fdc0a700-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "26a578ee-30c1-4393-aab3-eb32fdc0a700" (UID: "26a578ee-30c1-4393-aab3-eb32fdc0a700"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.308324 4745 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26a578ee-30c1-4393-aab3-eb32fdc0a700-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.317207 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26a578ee-30c1-4393-aab3-eb32fdc0a700-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "26a578ee-30c1-4393-aab3-eb32fdc0a700" (UID: "26a578ee-30c1-4393-aab3-eb32fdc0a700"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.409806 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26a578ee-30c1-4393-aab3-eb32fdc0a700-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.925948 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"26a578ee-30c1-4393-aab3-eb32fdc0a700","Type":"ContainerDied","Data":"26274a095b327fb404d44ccaa0495386171cfca9cd7821e4f6a2c2b74c7ac923"}
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.926000 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26274a095b327fb404d44ccaa0495386171cfca9cd7821e4f6a2c2b74c7ac923"
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.926123 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.930177 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv" event={"ID":"5b85ba9f-f076-4546-86d7-1fa02a52e80c","Type":"ContainerStarted","Data":"1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb"}
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.931660 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.940888 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv"
Jan 21 10:40:29 crc kubenswrapper[4745]: I0121 10:40:29.955688 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv" podStartSLOduration=14.955669154 podStartE2EDuration="14.955669154s" podCreationTimestamp="2026-01-21 10:40:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:40:29.9474158 +0000 UTC m=+214.408203398" watchObservedRunningTime="2026-01-21 10:40:29.955669154 +0000 UTC m=+214.416456752"
Jan 21 10:40:36 crc kubenswrapper[4745]: I0121 10:40:36.979035 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbg8m" event={"ID":"bfaacdad-12f1-4904-96db-f24427117da4","Type":"ContainerStarted","Data":"8f5a903f3ffc943bd0071f511ba8272b534d9d33537b5ae029d8873e5af70599"}
Jan 21 10:40:37 crc kubenswrapper[4745]: I0121 10:40:37.988101 4745 generic.go:334] "Generic (PLEG): container finished" podID="bfaacdad-12f1-4904-96db-f24427117da4" containerID="8f5a903f3ffc943bd0071f511ba8272b534d9d33537b5ae029d8873e5af70599" exitCode=0
Jan 21 10:40:37 crc kubenswrapper[4745]: I0121 10:40:37.988237 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbg8m" event={"ID":"bfaacdad-12f1-4904-96db-f24427117da4","Type":"ContainerDied","Data":"8f5a903f3ffc943bd0071f511ba8272b534d9d33537b5ae029d8873e5af70599"}
Jan 21 10:40:41 crc kubenswrapper[4745]: I0121 10:40:41.020996 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7989r" event={"ID":"131ae967-4e30-4b48-a2c7-fdcfc1109db8","Type":"ContainerStarted","Data":"a5d2bbd831a6c6cff749fbcd5933ba50ddae76cbac2267670ab20f03ca3a4036"}
Jan 21 10:40:41 crc kubenswrapper[4745]: I0121 10:40:41.032027 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbg8m" event={"ID":"bfaacdad-12f1-4904-96db-f24427117da4","Type":"ContainerStarted","Data":"7db6e1fcec09e6a97879daae2ea7f9aa33b8e7b0282dec6c5a7c0959245d9e4b"}
Jan 21 10:40:41 crc kubenswrapper[4745]: I0121 10:40:41.049809 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6pc4" event={"ID":"9d721ed0-4c33-4912-8973-e583db1e2075","Type":"ContainerStarted","Data":"30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54"}
Jan 21 10:40:41 crc kubenswrapper[4745]: I0121 10:40:41.110586 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sbg8m" podStartSLOduration=4.859147966 podStartE2EDuration="1m14.110501827s" podCreationTimestamp="2026-01-21 10:39:27 +0000 UTC" firstStartedPulling="2026-01-21 10:39:31.291142175 +0000 UTC m=+155.751929783" lastFinishedPulling="2026-01-21 10:40:40.542496046 +0000 UTC m=+225.003283644" observedRunningTime="2026-01-21 10:40:41.10942123 +0000 UTC m=+225.570208838" watchObservedRunningTime="2026-01-21 10:40:41.110501827 +0000 UTC m=+225.571289425"
Jan 21 10:40:42 crc kubenswrapper[4745]: I0121 10:40:42.063873 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lctvc" event={"ID":"fa834975-c760-4bcb-b0ee-e2f79ade8bd8","Type":"ContainerStarted","Data":"437b075e76a5838a8308b2c1fbc45bd893a643a8fde369b90f79871483ece477"}
Jan 21 10:40:42 crc kubenswrapper[4745]: I0121 10:40:42.072977 4745 generic.go:334] "Generic (PLEG): container finished" podID="58df78fb-8f34-4442-8547-cacf761708dd" containerID="aab0f26d506b001f2ad9d187d13e49152158d9f87db27116960b271ae3c18c5f" exitCode=0
Jan 21 10:40:42 crc kubenswrapper[4745]: I0121 10:40:42.073060 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lgfp9" event={"ID":"58df78fb-8f34-4442-8547-cacf761708dd","Type":"ContainerDied","Data":"aab0f26d506b001f2ad9d187d13e49152158d9f87db27116960b271ae3c18c5f"}
Jan 21 10:40:42 crc kubenswrapper[4745]: I0121 10:40:42.078413 4745 generic.go:334] "Generic (PLEG): container finished" podID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerID="a5d2bbd831a6c6cff749fbcd5933ba50ddae76cbac2267670ab20f03ca3a4036" exitCode=0
Jan 21 10:40:42 crc kubenswrapper[4745]: I0121 10:40:42.078592 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7989r" event={"ID":"131ae967-4e30-4b48-a2c7-fdcfc1109db8","Type":"ContainerDied","Data":"a5d2bbd831a6c6cff749fbcd5933ba50ddae76cbac2267670ab20f03ca3a4036"}
Jan 21 10:40:42 crc kubenswrapper[4745]: I0121 10:40:42.087542 4745 generic.go:334] "Generic (PLEG): container finished" podID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerID="eed807207bb05a33b2d34605f3c43a6287a86d10f97e199fd07e5d504de683ac" exitCode=0
Jan 21 10:40:42 crc kubenswrapper[4745]: I0121 10:40:42.087634 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqts" event={"ID":"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed","Type":"ContainerDied","Data":"eed807207bb05a33b2d34605f3c43a6287a86d10f97e199fd07e5d504de683ac"}
Jan 21 10:40:42 crc kubenswrapper[4745]: I0121 10:40:42.113591 4745 generic.go:334] "Generic (PLEG): container finished" podID="9d721ed0-4c33-4912-8973-e583db1e2075" containerID="30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54" exitCode=0
Jan 21 10:40:42 crc kubenswrapper[4745]: I0121 10:40:42.114747 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6pc4" event={"ID":"9d721ed0-4c33-4912-8973-e583db1e2075","Type":"ContainerDied","Data":"30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54"}
Jan 21 10:40:43 crc kubenswrapper[4745]: I0121 10:40:43.147974 4745 generic.go:334] "Generic (PLEG): container finished" podID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" containerID="437b075e76a5838a8308b2c1fbc45bd893a643a8fde369b90f79871483ece477" exitCode=0
Jan 21 10:40:43 crc kubenswrapper[4745]: I0121 10:40:43.148729 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lctvc" event={"ID":"fa834975-c760-4bcb-b0ee-e2f79ade8bd8","Type":"ContainerDied","Data":"437b075e76a5838a8308b2c1fbc45bd893a643a8fde369b90f79871483ece477"}
Jan 21 10:40:43 crc kubenswrapper[4745]: I0121 10:40:43.169155 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7989r" event={"ID":"131ae967-4e30-4b48-a2c7-fdcfc1109db8","Type":"ContainerStarted","Data":"3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada"}
Jan 21 10:40:43 crc kubenswrapper[4745]: I0121 10:40:43.200521 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7989r" podStartSLOduration=5.075192433 podStartE2EDuration="1m13.200491684s" podCreationTimestamp="2026-01-21 10:39:30 +0000 UTC" firstStartedPulling="2026-01-21 10:39:34.790896762 +0000 UTC m=+159.251684360" lastFinishedPulling="2026-01-21 10:40:42.916196013 +0000 UTC m=+227.376983611" observedRunningTime="2026-01-21 10:40:43.196460965 +0000 UTC m=+227.657248563" watchObservedRunningTime="2026-01-21 10:40:43.200491684 +0000 UTC m=+227.661279282"
Jan 21 10:40:44 crc kubenswrapper[4745]: I0121 10:40:44.439661 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lgfp9" event={"ID":"58df78fb-8f34-4442-8547-cacf761708dd","Type":"ContainerStarted","Data":"deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1"}
Jan 21 10:40:44 crc kubenswrapper[4745]: I0121 10:40:44.442148 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqts" event={"ID":"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed","Type":"ContainerStarted","Data":"c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b"}
Jan 21 10:40:44 crc kubenswrapper[4745]: I0121 10:40:44.444807 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn62p" event={"ID":"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28","Type":"ContainerStarted","Data":"b9495423924e84574bcb461777225dcd4c4051d88c357c75687e746daac2df81"}
Jan 21 10:40:44 crc kubenswrapper[4745]: I0121 10:40:44.446685 4745 generic.go:334] "Generic (PLEG): container finished" podID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" containerID="071b7a6004358557713e215dd2c7d14d199c910919d090bd4d06dd50ea87ccec" exitCode=0
Jan 21 10:40:44 crc kubenswrapper[4745]: I0121 10:40:44.446726 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52d7q" event={"ID":"be69561a-c25a-4e96-b75f-4f5664c5f2c4","Type":"ContainerDied","Data":"071b7a6004358557713e215dd2c7d14d199c910919d090bd4d06dd50ea87ccec"}
Jan 21 10:40:44 crc kubenswrapper[4745]: I0121 10:40:44.454354 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6pc4" event={"ID":"9d721ed0-4c33-4912-8973-e583db1e2075","Type":"ContainerStarted","Data":"2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64"}
Jan 21 10:40:44 crc kubenswrapper[4745]: I0121 10:40:44.458964 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lgfp9" podStartSLOduration=5.683738734 podStartE2EDuration="1m17.458947991s" podCreationTimestamp="2026-01-21 10:39:27 +0000 UTC" firstStartedPulling="2026-01-21 10:39:31.17782241 +0000 UTC m=+155.638610008" lastFinishedPulling="2026-01-21 10:40:42.953031667 +0000 UTC m=+227.413819265" observedRunningTime="2026-01-21 10:40:44.45539907 +0000 UTC m=+228.916186668" watchObservedRunningTime="2026-01-21 10:40:44.458947991 +0000 UTC m=+228.919735589"
Jan 21 10:40:44 crc kubenswrapper[4745]: I0121 10:40:44.459657 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lctvc" event={"ID":"fa834975-c760-4bcb-b0ee-e2f79ade8bd8","Type":"ContainerStarted","Data":"a3b0501f20681f73e27e834fbf44dcde68cee2f446973d728a211dc71d31a0a2"}
Jan 21 10:40:44 crc kubenswrapper[4745]: I0121 10:40:44.487093 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rgqts" podStartSLOduration=4.65699003 podStartE2EDuration="1m15.487079157s" podCreationTimestamp="2026-01-21 10:39:29 +0000 UTC" firstStartedPulling="2026-01-21 10:39:32.404829939 +0000 UTC m=+156.865617537" lastFinishedPulling="2026-01-21 10:40:43.234919066 +0000 UTC m=+227.695706664" observedRunningTime="2026-01-21 10:40:44.486631862 +0000 UTC m=+228.947419460" watchObservedRunningTime="2026-01-21 10:40:44.487079157 +0000 UTC m=+228.947866755"
Jan 21 10:40:44 crc kubenswrapper[4745]: I0121 10:40:44.517998 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c6pc4" podStartSLOduration=5.83402371 podStartE2EDuration="1m17.517981698s" podCreationTimestamp="2026-01-21 10:39:27 +0000 UTC" firstStartedPulling="2026-01-21 10:39:31.360468267 +0000 UTC m=+155.821255865" lastFinishedPulling="2026-01-21 10:40:43.044426255 +0000 UTC m=+227.505213853" observedRunningTime="2026-01-21 10:40:44.517405808 +0000 UTC m=+228.978193406" watchObservedRunningTime="2026-01-21 10:40:44.517981698 +0000 UTC m=+228.978769286"
Jan 21 10:40:44 crc kubenswrapper[4745]: I0121 10:40:44.609497 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lctvc" podStartSLOduration=4.026635219 podStartE2EDuration="1m14.60948392s" podCreationTimestamp="2026-01-21 10:39:30 +0000 UTC" firstStartedPulling="2026-01-21 10:39:32.415424899 +0000 UTC m=+156.876212497" lastFinishedPulling="2026-01-21 10:40:42.9982736 +0000 UTC m=+227.459061198" observedRunningTime="2026-01-21 10:40:44.604219259 +0000 UTC m=+229.065006857" watchObservedRunningTime="2026-01-21 10:40:44.60948392 +0000 UTC m=+229.070271518"
Jan 21 10:40:45 crc kubenswrapper[4745]: I0121 10:40:45.471414 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52d7q" event={"ID":"be69561a-c25a-4e96-b75f-4f5664c5f2c4","Type":"ContainerStarted","Data":"c765dc1d997c11db6920633421833c361eeba7f72d7e6bb7f8bab33263a2304d"}
Jan 21 10:40:45 crc kubenswrapper[4745]: I0121 10:40:45.866602 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:40:45 crc kubenswrapper[4745]: I0121 10:40:45.867461 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:40:45 crc kubenswrapper[4745]: I0121 10:40:45.867614 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm"
Jan 21 10:40:45 crc kubenswrapper[4745]: I0121 10:40:45.868480 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 10:40:45 crc kubenswrapper[4745]: I0121 10:40:45.868641 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a" gracePeriod=600
Jan 21 10:40:46 crc kubenswrapper[4745]: I0121 10:40:46.481120 4745 generic.go:334] "Generic (PLEG): container finished" podID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" containerID="b9495423924e84574bcb461777225dcd4c4051d88c357c75687e746daac2df81" exitCode=0
Jan 21 10:40:46 crc kubenswrapper[4745]: I0121 10:40:46.481204 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn62p" event={"ID":"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28","Type":"ContainerDied","Data":"b9495423924e84574bcb461777225dcd4c4051d88c357c75687e746daac2df81"}
Jan 21 10:40:46 crc kubenswrapper[4745]: I0121 10:40:46.508612 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-52d7q" podStartSLOduration=5.954345061 podStartE2EDuration="1m19.508582802s" podCreationTimestamp="2026-01-21 10:39:27 +0000 UTC" firstStartedPulling="2026-01-21 10:39:31.276789585 +0000 UTC m=+155.737577183" lastFinishedPulling="2026-01-21 10:40:44.831027336 +0000 UTC m=+229.291814924" observedRunningTime="2026-01-21 10:40:45.509564522 +0000 UTC m=+229.970352120" watchObservedRunningTime="2026-01-21 10:40:46.508582802 +0000 UTC m=+230.969370400"
Jan 21 10:40:47 crc kubenswrapper[4745]: I0121 10:40:47.490577 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a" exitCode=0
Jan 21 10:40:47 crc kubenswrapper[4745]: I0121 10:40:47.490676 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a"}
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.050359 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-52d7q"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.051468 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-52d7q"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.294575 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c6pc4"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.294644 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c6pc4"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.411222 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c6pc4"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.411967 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-52d7q"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.472729 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lgfp9"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.472798 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-lgfp9"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.491669 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sbg8m"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.491736 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sbg8m"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.527717 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lgfp9"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.543702 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sbg8m"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.562047 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c6pc4"
Jan 21 10:40:48 crc kubenswrapper[4745]: I0121 10:40:48.604063 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lgfp9"
Jan 21 10:40:49 crc kubenswrapper[4745]: I0121 10:40:49.242095 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lgfp9"]
Jan 21 10:40:49 crc kubenswrapper[4745]: I0121 10:40:49.552233 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sbg8m"
Jan 21 10:40:50 crc kubenswrapper[4745]: I0121 10:40:50.178402 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rgqts"
Jan 21 10:40:50 crc kubenswrapper[4745]: I0121 10:40:50.179054 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rgqts"
Jan 21 10:40:50 crc kubenswrapper[4745]: I0121 10:40:50.246859 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rgqts"
Jan 21 10:40:50 crc kubenswrapper[4745]: I0121 10:40:50.512729 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"3df09e71c9d2707ec57491f50eb014c05e1cb37d897939e30ac06524ed542e46"}
Jan 21 10:40:50 crc kubenswrapper[4745]: I0121 10:40:50.513605 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lgfp9" podUID="58df78fb-8f34-4442-8547-cacf761708dd" containerName="registry-server" containerID="cri-o://deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1" gracePeriod=2
Jan 21 10:40:50 crc kubenswrapper[4745]: I0121 10:40:50.558729 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rgqts"
Jan 21 10:40:50 crc kubenswrapper[4745]: I0121 10:40:50.648583 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lctvc"
Jan 21 10:40:50 crc kubenswrapper[4745]: I0121 10:40:50.648703 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lctvc"
Jan 21 10:40:50 crc kubenswrapper[4745]: I0121 10:40:50.701320 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lctvc"
Jan 21 10:40:50 crc kubenswrapper[4745]: I0121 10:40:50.955286 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7989r"
Jan 21 10:40:50 crc kubenswrapper[4745]: I0121 10:40:50.955346 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7989r"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.003959 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7989r"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.403924 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lgfp9"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.521007 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn62p" event={"ID":"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28","Type":"ContainerStarted","Data":"f49e3a4d6172f3f5d53c6744942d36dd66c7439cd0986f940c625c6fdd8152eb"}
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.522995 4745 generic.go:334] "Generic (PLEG): container finished" podID="58df78fb-8f34-4442-8547-cacf761708dd" containerID="deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1" exitCode=0
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.523119 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lgfp9" event={"ID":"58df78fb-8f34-4442-8547-cacf761708dd","Type":"ContainerDied","Data":"deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1"}
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.523118 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lgfp9"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.523169 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lgfp9" event={"ID":"58df78fb-8f34-4442-8547-cacf761708dd","Type":"ContainerDied","Data":"f72e100195cff6a4cb9e98a5466c83bf5e8566c94130fe4bae9d0611f24e76c2"}
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.523193 4745 scope.go:117] "RemoveContainer" containerID="deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.549897 4745 scope.go:117] "RemoveContainer" containerID="aab0f26d506b001f2ad9d187d13e49152158d9f87db27116960b271ae3c18c5f"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.551883 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-fn62p" podStartSLOduration=5.2786587130000004 podStartE2EDuration="1m21.551851195s" podCreationTimestamp="2026-01-21 10:39:30 +0000 UTC" firstStartedPulling="2026-01-21 10:39:34.671428724 +0000 UTC m=+159.132216322" lastFinishedPulling="2026-01-21 10:40:50.944621206 +0000 UTC m=+235.405408804" observedRunningTime="2026-01-21 10:40:51.542946799 +0000 UTC m=+236.003734407" watchObservedRunningTime="2026-01-21 10:40:51.551851195 +0000 UTC m=+236.012638793"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.566967 4745 scope.go:117] "RemoveContainer" containerID="84656d9abcd36cf24043e5d1bf2fac3e6fab1173ef66c6db2a10af50eefc4491"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.581254 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lctvc"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.588719 4745 scope.go:117] "RemoveContainer" containerID="deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1"
Jan 21 10:40:51 crc kubenswrapper[4745]: E0121 10:40:51.595232 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1\": container with ID starting with deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1 not found: ID does not exist" containerID="deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.595304 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1"} err="failed to get container status \"deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1\": rpc error: code = NotFound desc = could not find container \"deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1\": container with ID starting with deaa61b00f5fef14906c769e7ea84116f6c0af1e939e6419c3eff689e9b006f1 not found: ID does not exist"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.595853 4745 scope.go:117] "RemoveContainer" containerID="aab0f26d506b001f2ad9d187d13e49152158d9f87db27116960b271ae3c18c5f"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.596914 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwfpl\" (UniqueName: \"kubernetes.io/projected/58df78fb-8f34-4442-8547-cacf761708dd-kube-api-access-mwfpl\") pod \"58df78fb-8f34-4442-8547-cacf761708dd\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") "
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.597067 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-utilities\") pod \"58df78fb-8f34-4442-8547-cacf761708dd\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") "
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.597136 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-catalog-content\") pod \"58df78fb-8f34-4442-8547-cacf761708dd\" (UID: \"58df78fb-8f34-4442-8547-cacf761708dd\") "
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.597383 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7989r"
Jan 21 10:40:51 crc kubenswrapper[4745]: E0121 10:40:51.598298 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aab0f26d506b001f2ad9d187d13e49152158d9f87db27116960b271ae3c18c5f\": container with ID starting with aab0f26d506b001f2ad9d187d13e49152158d9f87db27116960b271ae3c18c5f not found: ID does not exist" containerID="aab0f26d506b001f2ad9d187d13e49152158d9f87db27116960b271ae3c18c5f"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.598381 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aab0f26d506b001f2ad9d187d13e49152158d9f87db27116960b271ae3c18c5f"} err="failed to get container status \"aab0f26d506b001f2ad9d187d13e49152158d9f87db27116960b271ae3c18c5f\": rpc error: code = NotFound desc = could not find container \"aab0f26d506b001f2ad9d187d13e49152158d9f87db27116960b271ae3c18c5f\": container with ID starting with aab0f26d506b001f2ad9d187d13e49152158d9f87db27116960b271ae3c18c5f not found: ID does not exist"
Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.598403 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-utilities" (OuterVolumeSpecName: "utilities") pod "58df78fb-8f34-4442-8547-cacf761708dd" (UID: "58df78fb-8f34-4442-8547-cacf761708dd"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.598437 4745 scope.go:117] "RemoveContainer" containerID="84656d9abcd36cf24043e5d1bf2fac3e6fab1173ef66c6db2a10af50eefc4491" Jan 21 10:40:51 crc kubenswrapper[4745]: E0121 10:40:51.599210 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84656d9abcd36cf24043e5d1bf2fac3e6fab1173ef66c6db2a10af50eefc4491\": container with ID starting with 84656d9abcd36cf24043e5d1bf2fac3e6fab1173ef66c6db2a10af50eefc4491 not found: ID does not exist" containerID="84656d9abcd36cf24043e5d1bf2fac3e6fab1173ef66c6db2a10af50eefc4491" Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.599260 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84656d9abcd36cf24043e5d1bf2fac3e6fab1173ef66c6db2a10af50eefc4491"} err="failed to get container status \"84656d9abcd36cf24043e5d1bf2fac3e6fab1173ef66c6db2a10af50eefc4491\": rpc error: code = NotFound desc = could not find container \"84656d9abcd36cf24043e5d1bf2fac3e6fab1173ef66c6db2a10af50eefc4491\": container with ID starting with 84656d9abcd36cf24043e5d1bf2fac3e6fab1173ef66c6db2a10af50eefc4491 not found: ID does not exist" Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.612433 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58df78fb-8f34-4442-8547-cacf761708dd-kube-api-access-mwfpl" (OuterVolumeSpecName: "kube-api-access-mwfpl") pod "58df78fb-8f34-4442-8547-cacf761708dd" (UID: "58df78fb-8f34-4442-8547-cacf761708dd"). InnerVolumeSpecName "kube-api-access-mwfpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.658786 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sbg8m"] Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.659128 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sbg8m" podUID="bfaacdad-12f1-4904-96db-f24427117da4" containerName="registry-server" containerID="cri-o://7db6e1fcec09e6a97879daae2ea7f9aa33b8e7b0282dec6c5a7c0959245d9e4b" gracePeriod=2 Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.678137 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58df78fb-8f34-4442-8547-cacf761708dd" (UID: "58df78fb-8f34-4442-8547-cacf761708dd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.700826 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwfpl\" (UniqueName: \"kubernetes.io/projected/58df78fb-8f34-4442-8547-cacf761708dd-kube-api-access-mwfpl\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.700868 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.700878 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58df78fb-8f34-4442-8547-cacf761708dd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.862607 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-lgfp9"] Jan 21 10:40:51 crc kubenswrapper[4745]: I0121 10:40:51.863039 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lgfp9"] Jan 21 10:40:52 crc kubenswrapper[4745]: I0121 10:40:52.008184 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58df78fb-8f34-4442-8547-cacf761708dd" path="/var/lib/kubelet/pods/58df78fb-8f34-4442-8547-cacf761708dd/volumes" Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.092928 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" podUID="9896d393-c134-4abe-ac04-1da7e6ea3aed" containerName="oauth-openshift" containerID="cri-o://b9c1cc26369606b702a2a7976adfea9d28cf584161d0e9a2206b2e356ce23280" gracePeriod=15 Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.555142 4745 generic.go:334] "Generic (PLEG): container finished" podID="bfaacdad-12f1-4904-96db-f24427117da4" containerID="7db6e1fcec09e6a97879daae2ea7f9aa33b8e7b0282dec6c5a7c0959245d9e4b" exitCode=0 Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.555365 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbg8m" event={"ID":"bfaacdad-12f1-4904-96db-f24427117da4","Type":"ContainerDied","Data":"7db6e1fcec09e6a97879daae2ea7f9aa33b8e7b0282dec6c5a7c0959245d9e4b"} Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.555708 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sbg8m" event={"ID":"bfaacdad-12f1-4904-96db-f24427117da4","Type":"ContainerDied","Data":"53614e5a61ee09a6babb8e0b2766ec5f1c666b8e3c1212d3737a79e919c86638"} Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.555730 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53614e5a61ee09a6babb8e0b2766ec5f1c666b8e3c1212d3737a79e919c86638" Jan 21 10:40:53 crc 
kubenswrapper[4745]: I0121 10:40:53.555736 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.558219 4745 generic.go:334] "Generic (PLEG): container finished" podID="9896d393-c134-4abe-ac04-1da7e6ea3aed" containerID="b9c1cc26369606b702a2a7976adfea9d28cf584161d0e9a2206b2e356ce23280" exitCode=0 Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.558263 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" event={"ID":"9896d393-c134-4abe-ac04-1da7e6ea3aed","Type":"ContainerDied","Data":"b9c1cc26369606b702a2a7976adfea9d28cf584161d0e9a2206b2e356ce23280"} Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.642127 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-utilities\") pod \"bfaacdad-12f1-4904-96db-f24427117da4\" (UID: \"bfaacdad-12f1-4904-96db-f24427117da4\") " Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.642178 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-catalog-content\") pod \"bfaacdad-12f1-4904-96db-f24427117da4\" (UID: \"bfaacdad-12f1-4904-96db-f24427117da4\") " Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.642218 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8vp4\" (UniqueName: \"kubernetes.io/projected/bfaacdad-12f1-4904-96db-f24427117da4-kube-api-access-r8vp4\") pod \"bfaacdad-12f1-4904-96db-f24427117da4\" (UID: \"bfaacdad-12f1-4904-96db-f24427117da4\") " Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.643660 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-utilities" (OuterVolumeSpecName: "utilities") pod "bfaacdad-12f1-4904-96db-f24427117da4" (UID: "bfaacdad-12f1-4904-96db-f24427117da4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.649734 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfaacdad-12f1-4904-96db-f24427117da4-kube-api-access-r8vp4" (OuterVolumeSpecName: "kube-api-access-r8vp4") pod "bfaacdad-12f1-4904-96db-f24427117da4" (UID: "bfaacdad-12f1-4904-96db-f24427117da4"). InnerVolumeSpecName "kube-api-access-r8vp4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.706983 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfaacdad-12f1-4904-96db-f24427117da4" (UID: "bfaacdad-12f1-4904-96db-f24427117da4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.743633 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.743692 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8vp4\" (UniqueName: \"kubernetes.io/projected/bfaacdad-12f1-4904-96db-f24427117da4-kube-api-access-r8vp4\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:53 crc kubenswrapper[4745]: I0121 10:40:53.743711 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfaacdad-12f1-4904-96db-f24427117da4-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.043384 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lctvc"] Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.044029 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lctvc" podUID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" containerName="registry-server" containerID="cri-o://a3b0501f20681f73e27e834fbf44dcde68cee2f446973d728a211dc71d31a0a2" gracePeriod=2 Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.329679 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.453867 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-ocp-branding-template\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.453967 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-session\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454020 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-serving-cert\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454061 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-idp-0-file-data\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454110 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-error\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: 
\"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454181 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-dir\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454215 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-policies\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454265 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-provider-selection\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454303 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-cliconfig\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454349 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt29p\" (UniqueName: \"kubernetes.io/projected/9896d393-c134-4abe-ac04-1da7e6ea3aed-kube-api-access-mt29p\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454373 4745 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-trusted-ca-bundle\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454433 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-login\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454473 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-service-ca\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.454512 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-router-certs\") pod \"9896d393-c134-4abe-ac04-1da7e6ea3aed\" (UID: \"9896d393-c134-4abe-ac04-1da7e6ea3aed\") " Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.455787 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.455868 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.456839 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.459699 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.460312 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.460895 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.461439 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.461920 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.462165 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.463779 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9896d393-c134-4abe-ac04-1da7e6ea3aed-kube-api-access-mt29p" (OuterVolumeSpecName: "kube-api-access-mt29p") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "kube-api-access-mt29p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.464240 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.464382 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.465106 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.466804 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9896d393-c134-4abe-ac04-1da7e6ea3aed" (UID: "9896d393-c134-4abe-ac04-1da7e6ea3aed"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.557867 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.557909 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.557925 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.557937 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.557950 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.557960 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.557971 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.557983 4745 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.557995 4745 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.558006 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.558017 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.558027 4745 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-mt29p\" (UniqueName: \"kubernetes.io/projected/9896d393-c134-4abe-ac04-1da7e6ea3aed-kube-api-access-mt29p\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.558038 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.558048 4745 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9896d393-c134-4abe-ac04-1da7e6ea3aed-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.568459 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.568420 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5ck2f" event={"ID":"9896d393-c134-4abe-ac04-1da7e6ea3aed","Type":"ContainerDied","Data":"4823ba21160b376dc2ed3287dd45d0383278ee7036269e06c92e3fb4d3ae6e70"} Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.569065 4745 scope.go:117] "RemoveContainer" containerID="b9c1cc26369606b702a2a7976adfea9d28cf584161d0e9a2206b2e356ce23280" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.568466 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sbg8m" Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.597699 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sbg8m"] Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.603341 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sbg8m"] Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.615583 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ck2f"] Jan 21 10:40:54 crc kubenswrapper[4745]: I0121 10:40:54.619865 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5ck2f"] Jan 21 10:40:55 crc kubenswrapper[4745]: I0121 10:40:55.700990 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-884444db4-5s4xv"] Jan 21 10:40:55 crc kubenswrapper[4745]: I0121 10:40:55.701830 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv" podUID="5b85ba9f-f076-4546-86d7-1fa02a52e80c" containerName="controller-manager" containerID="cri-o://1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb" gracePeriod=30 Jan 21 10:40:55 crc kubenswrapper[4745]: I0121 10:40:55.811632 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc"] Jan 21 10:40:55 crc kubenswrapper[4745]: I0121 10:40:55.811968 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" podUID="b7f663f7-35ba-4c54-a326-27891aeb51e4" containerName="route-controller-manager" containerID="cri-o://3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61" gracePeriod=30 Jan 21 10:40:56 
crc kubenswrapper[4745]: I0121 10:40:56.022739 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9896d393-c134-4abe-ac04-1da7e6ea3aed" path="/var/lib/kubelet/pods/9896d393-c134-4abe-ac04-1da7e6ea3aed/volumes" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.024961 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfaacdad-12f1-4904-96db-f24427117da4" path="/var/lib/kubelet/pods/bfaacdad-12f1-4904-96db-f24427117da4/volumes" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.375730 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.431698 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-574b75df8-8wd29"] Jan 21 10:40:56 crc kubenswrapper[4745]: E0121 10:40:56.432128 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58df78fb-8f34-4442-8547-cacf761708dd" containerName="extract-content" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432157 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="58df78fb-8f34-4442-8547-cacf761708dd" containerName="extract-content" Jan 21 10:40:56 crc kubenswrapper[4745]: E0121 10:40:56.432175 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58df78fb-8f34-4442-8547-cacf761708dd" containerName="registry-server" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432185 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="58df78fb-8f34-4442-8547-cacf761708dd" containerName="registry-server" Jan 21 10:40:56 crc kubenswrapper[4745]: E0121 10:40:56.432199 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfaacdad-12f1-4904-96db-f24427117da4" containerName="extract-content" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432207 4745 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bfaacdad-12f1-4904-96db-f24427117da4" containerName="extract-content" Jan 21 10:40:56 crc kubenswrapper[4745]: E0121 10:40:56.432220 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b85ba9f-f076-4546-86d7-1fa02a52e80c" containerName="controller-manager" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432228 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b85ba9f-f076-4546-86d7-1fa02a52e80c" containerName="controller-manager" Jan 21 10:40:56 crc kubenswrapper[4745]: E0121 10:40:56.432241 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfaacdad-12f1-4904-96db-f24427117da4" containerName="registry-server" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432254 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfaacdad-12f1-4904-96db-f24427117da4" containerName="registry-server" Jan 21 10:40:56 crc kubenswrapper[4745]: E0121 10:40:56.432267 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58df78fb-8f34-4442-8547-cacf761708dd" containerName="extract-utilities" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432275 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="58df78fb-8f34-4442-8547-cacf761708dd" containerName="extract-utilities" Jan 21 10:40:56 crc kubenswrapper[4745]: E0121 10:40:56.432287 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26a578ee-30c1-4393-aab3-eb32fdc0a700" containerName="pruner" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432294 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="26a578ee-30c1-4393-aab3-eb32fdc0a700" containerName="pruner" Jan 21 10:40:56 crc kubenswrapper[4745]: E0121 10:40:56.432308 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfaacdad-12f1-4904-96db-f24427117da4" containerName="extract-utilities" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432316 4745 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bfaacdad-12f1-4904-96db-f24427117da4" containerName="extract-utilities" Jan 21 10:40:56 crc kubenswrapper[4745]: E0121 10:40:56.432326 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9896d393-c134-4abe-ac04-1da7e6ea3aed" containerName="oauth-openshift" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432333 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9896d393-c134-4abe-ac04-1da7e6ea3aed" containerName="oauth-openshift" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432462 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="58df78fb-8f34-4442-8547-cacf761708dd" containerName="registry-server" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432481 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfaacdad-12f1-4904-96db-f24427117da4" containerName="registry-server" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432491 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="9896d393-c134-4abe-ac04-1da7e6ea3aed" containerName="oauth-openshift" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432512 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="26a578ee-30c1-4393-aab3-eb32fdc0a700" containerName="pruner" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.432563 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b85ba9f-f076-4546-86d7-1fa02a52e80c" containerName="controller-manager" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.433223 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.460786 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.464085 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.464946 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.465352 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.469076 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.469468 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.469688 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.469888 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.470220 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.470343 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 10:40:56 crc 
kubenswrapper[4745]: I0121 10:40:56.470493 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.470718 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.477776 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.481564 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-574b75df8-8wd29"] Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.484298 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-config\") pod \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.484461 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6flsx\" (UniqueName: \"kubernetes.io/projected/5b85ba9f-f076-4546-86d7-1fa02a52e80c-kube-api-access-6flsx\") pod \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.484553 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b85ba9f-f076-4546-86d7-1fa02a52e80c-serving-cert\") pod \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.484590 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-proxy-ca-bundles\") pod \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.484627 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-client-ca\") pod \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\" (UID: \"5b85ba9f-f076-4546-86d7-1fa02a52e80c\") " Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.485585 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-client-ca" (OuterVolumeSpecName: "client-ca") pod "5b85ba9f-f076-4546-86d7-1fa02a52e80c" (UID: "5b85ba9f-f076-4546-86d7-1fa02a52e80c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.485753 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-config" (OuterVolumeSpecName: "config") pod "5b85ba9f-f076-4546-86d7-1fa02a52e80c" (UID: "5b85ba9f-f076-4546-86d7-1fa02a52e80c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.486231 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5b85ba9f-f076-4546-86d7-1fa02a52e80c" (UID: "5b85ba9f-f076-4546-86d7-1fa02a52e80c"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.488449 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.492971 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b85ba9f-f076-4546-86d7-1fa02a52e80c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5b85ba9f-f076-4546-86d7-1fa02a52e80c" (UID: "5b85ba9f-f076-4546-86d7-1fa02a52e80c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.501557 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b85ba9f-f076-4546-86d7-1fa02a52e80c-kube-api-access-6flsx" (OuterVolumeSpecName: "kube-api-access-6flsx") pod "5b85ba9f-f076-4546-86d7-1fa02a52e80c" (UID: "5b85ba9f-f076-4546-86d7-1fa02a52e80c"). InnerVolumeSpecName "kube-api-access-6flsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.508328 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.586638 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-audit-policies\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.586703 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.586753 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-service-ca\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.586849 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-template-error\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: 
\"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.586885 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.586913 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-template-login\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.586938 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.586963 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 
10:40:56.586983 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-router-certs\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.587001 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.587037 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-session\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.587060 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdgpf\" (UniqueName: \"kubernetes.io/projected/bba28065-a564-430c-ac41-309ebc0089a3-kube-api-access-pdgpf\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.587078 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.587101 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bba28065-a564-430c-ac41-309ebc0089a3-audit-dir\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.587163 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.587177 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.587189 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6flsx\" (UniqueName: \"kubernetes.io/projected/5b85ba9f-f076-4546-86d7-1fa02a52e80c-kube-api-access-6flsx\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.587201 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5b85ba9f-f076-4546-86d7-1fa02a52e80c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.587212 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5b85ba9f-f076-4546-86d7-1fa02a52e80c-proxy-ca-bundles\") on node \"crc\" 
DevicePath \"\"" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.604826 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.605603 4745 generic.go:334] "Generic (PLEG): container finished" podID="5b85ba9f-f076-4546-86d7-1fa02a52e80c" containerID="1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb" exitCode=0 Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.605770 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.605871 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv" event={"ID":"5b85ba9f-f076-4546-86d7-1fa02a52e80c","Type":"ContainerDied","Data":"1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb"} Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.605925 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-884444db4-5s4xv" event={"ID":"5b85ba9f-f076-4546-86d7-1fa02a52e80c","Type":"ContainerDied","Data":"9a062694a3e0252553ea9e60c5f32be597edb55e283b02495d7913192f76a3ca"} Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.605958 4745 scope.go:117] "RemoveContainer" containerID="1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.614045 4745 generic.go:334] "Generic (PLEG): container finished" podID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" containerID="a3b0501f20681f73e27e834fbf44dcde68cee2f446973d728a211dc71d31a0a2" exitCode=0 Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.614442 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lctvc" 
event={"ID":"fa834975-c760-4bcb-b0ee-e2f79ade8bd8","Type":"ContainerDied","Data":"a3b0501f20681f73e27e834fbf44dcde68cee2f446973d728a211dc71d31a0a2"} Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.617908 4745 generic.go:334] "Generic (PLEG): container finished" podID="b7f663f7-35ba-4c54-a326-27891aeb51e4" containerID="3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61" exitCode=0 Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.618129 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" event={"ID":"b7f663f7-35ba-4c54-a326-27891aeb51e4","Type":"ContainerDied","Data":"3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61"} Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.618267 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" event={"ID":"b7f663f7-35ba-4c54-a326-27891aeb51e4","Type":"ContainerDied","Data":"017d49dedf9269d11899d68d379f588489a04f44ffcf8edccbb87cecf3e6ad4f"} Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.618095 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.630963 4745 scope.go:117] "RemoveContainer" containerID="1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb" Jan 21 10:40:56 crc kubenswrapper[4745]: E0121 10:40:56.634373 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb\": container with ID starting with 1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb not found: ID does not exist" containerID="1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.634521 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb"} err="failed to get container status \"1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb\": rpc error: code = NotFound desc = could not find container \"1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb\": container with ID starting with 1db096b1ec164d026c286270d8768603bace745be20ffa54d046bfa0a4e9bdfb not found: ID does not exist" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.634681 4745 scope.go:117] "RemoveContainer" containerID="3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.671697 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-884444db4-5s4xv"] Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.675613 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-884444db4-5s4xv"] Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.688430 4745 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-config\") pod \"b7f663f7-35ba-4c54-a326-27891aeb51e4\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.688839 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7f663f7-35ba-4c54-a326-27891aeb51e4-serving-cert\") pod \"b7f663f7-35ba-4c54-a326-27891aeb51e4\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.688971 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtnwn\" (UniqueName: \"kubernetes.io/projected/b7f663f7-35ba-4c54-a326-27891aeb51e4-kube-api-access-wtnwn\") pod \"b7f663f7-35ba-4c54-a326-27891aeb51e4\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.689132 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-client-ca\") pod \"b7f663f7-35ba-4c54-a326-27891aeb51e4\" (UID: \"b7f663f7-35ba-4c54-a326-27891aeb51e4\") " Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.689373 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bba28065-a564-430c-ac41-309ebc0089a3-audit-dir\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.689488 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-audit-policies\") pod \"oauth-openshift-574b75df8-8wd29\" 
(UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.689620 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.689745 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-service-ca\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.689906 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-template-error\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.690027 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.690141 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-template-login\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.690247 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.690349 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.690442 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-router-certs\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.690549 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.690694 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-session\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.690807 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdgpf\" (UniqueName: \"kubernetes.io/projected/bba28065-a564-430c-ac41-309ebc0089a3-kube-api-access-pdgpf\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.690915 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.692513 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.693168 4745 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-config" (OuterVolumeSpecName: "config") pod "b7f663f7-35ba-4c54-a326-27891aeb51e4" (UID: "b7f663f7-35ba-4c54-a326-27891aeb51e4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.693336 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bba28065-a564-430c-ac41-309ebc0089a3-audit-dir\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.693381 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-service-ca\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.693612 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-client-ca" (OuterVolumeSpecName: "client-ca") pod "b7f663f7-35ba-4c54-a326-27891aeb51e4" (UID: "b7f663f7-35ba-4c54-a326-27891aeb51e4"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.693894 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-audit-policies\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.694399 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-cliconfig\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.696590 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.699314 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-router-certs\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.699708 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-template-login\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.700030 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-template-error\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.700020 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7f663f7-35ba-4c54-a326-27891aeb51e4-kube-api-access-wtnwn" (OuterVolumeSpecName: "kube-api-access-wtnwn") pod "b7f663f7-35ba-4c54-a326-27891aeb51e4" (UID: "b7f663f7-35ba-4c54-a326-27891aeb51e4"). InnerVolumeSpecName "kube-api-access-wtnwn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.700309 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-serving-cert\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.701430 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.701751 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.701943 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7f663f7-35ba-4c54-a326-27891aeb51e4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b7f663f7-35ba-4c54-a326-27891aeb51e4" (UID: "b7f663f7-35ba-4c54-a326-27891aeb51e4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.706702 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bba28065-a564-430c-ac41-309ebc0089a3-v4-0-config-system-session\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.714457 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdgpf\" (UniqueName: \"kubernetes.io/projected/bba28065-a564-430c-ac41-309ebc0089a3-kube-api-access-pdgpf\") pod \"oauth-openshift-574b75df8-8wd29\" (UID: \"bba28065-a564-430c-ac41-309ebc0089a3\") " pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.792851 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.793242 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7f663f7-35ba-4c54-a326-27891aeb51e4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.793370 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtnwn\" (UniqueName: \"kubernetes.io/projected/b7f663f7-35ba-4c54-a326-27891aeb51e4-kube-api-access-wtnwn\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.793462 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b7f663f7-35ba-4c54-a326-27891aeb51e4-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 
10:40:56.834820 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.946859 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc"] Jan 21 10:40:56 crc kubenswrapper[4745]: I0121 10:40:56.951870 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc"] Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.083344 4745 scope.go:117] "RemoveContainer" containerID="3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61" Jan 21 10:40:57 crc kubenswrapper[4745]: E0121 10:40:57.084153 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61\": container with ID starting with 3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61 not found: ID does not exist" containerID="3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.084274 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61"} err="failed to get container status \"3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61\": rpc error: code = NotFound desc = could not find container \"3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61\": container with ID starting with 3fc8ed50586cd46ff06f47b62126a4f384df881fe743a1b93711abb74660ba61 not found: ID does not exist" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.425575 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf"] Jan 
21 10:40:57 crc kubenswrapper[4745]: E0121 10:40:57.426637 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7f663f7-35ba-4c54-a326-27891aeb51e4" containerName="route-controller-manager" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.426717 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7f663f7-35ba-4c54-a326-27891aeb51e4" containerName="route-controller-manager" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.426868 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7f663f7-35ba-4c54-a326-27891aeb51e4" containerName="route-controller-manager" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.428412 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.428834 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d"] Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.430309 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.437905 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.437933 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.438119 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.438359 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.438423 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.438927 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.438931 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.439414 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.439762 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.439808 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 10:40:57 crc kubenswrapper[4745]: 
I0121 10:40:57.440160 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.440209 4745 patch_prober.go:28] interesting pod/route-controller-manager-7df6c4f584-jxgrc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: i/o timeout" start-of-body= Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.440263 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7df6c4f584-jxgrc" podUID="b7f663f7-35ba-4c54-a326-27891aeb51e4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": dial tcp 10.217.0.56:8443: i/o timeout" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.441054 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf"] Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.443430 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.446763 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.448655 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d"] Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.540022 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-574b75df8-8wd29"] Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.589430 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.608785 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsxgk\" (UniqueName: \"kubernetes.io/projected/abcd5d16-268c-47f2-af61-c06b081b624f-kube-api-access-wsxgk\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.608947 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abcd5d16-268c-47f2-af61-c06b081b624f-serving-cert\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.609015 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-client-ca\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.609037 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-proxy-ca-bundles\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.609069 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-5cs9b\" (UniqueName: \"kubernetes.io/projected/348c6d16-dd12-4eb6-af84-5171192435ae-kube-api-access-5cs9b\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.609098 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-client-ca\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.609130 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-config\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.609171 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348c6d16-dd12-4eb6-af84-5171192435ae-serving-cert\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.609213 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-config\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: 
\"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.639663 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lctvc" event={"ID":"fa834975-c760-4bcb-b0ee-e2f79ade8bd8","Type":"ContainerDied","Data":"89d4117d14a6e9f82cc110855239fde84850aabb4c691cf0e435cb94f84471b8"} Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.639721 4745 scope.go:117] "RemoveContainer" containerID="a3b0501f20681f73e27e834fbf44dcde68cee2f446973d728a211dc71d31a0a2" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.639849 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lctvc" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.644458 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" event={"ID":"bba28065-a564-430c-ac41-309ebc0089a3","Type":"ContainerStarted","Data":"6d5d58e2e5db1a4a80706697d489210b90d3ab4f540b4cb62cbd24ddf9e898ea"} Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.667180 4745 scope.go:117] "RemoveContainer" containerID="437b075e76a5838a8308b2c1fbc45bd893a643a8fde369b90f79871483ece477" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.688425 4745 scope.go:117] "RemoveContainer" containerID="62376f3e2adfde3a28086793e63ec792d924786e8f0c5c649c0915e3672074da" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.710669 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd5b4\" (UniqueName: \"kubernetes.io/projected/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-kube-api-access-zd5b4\") pod \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\" (UID: \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.710752 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-utilities\") pod \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\" (UID: \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.710968 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-catalog-content\") pod \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\" (UID: \"fa834975-c760-4bcb-b0ee-e2f79ade8bd8\") " Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.711213 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-client-ca\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.711241 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-proxy-ca-bundles\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.711267 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cs9b\" (UniqueName: \"kubernetes.io/projected/348c6d16-dd12-4eb6-af84-5171192435ae-kube-api-access-5cs9b\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.711296 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-client-ca\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.711321 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-config\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.711350 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348c6d16-dd12-4eb6-af84-5171192435ae-serving-cert\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.711385 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-config\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.711420 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsxgk\" (UniqueName: \"kubernetes.io/projected/abcd5d16-268c-47f2-af61-c06b081b624f-kube-api-access-wsxgk\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 
10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.711440 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abcd5d16-268c-47f2-af61-c06b081b624f-serving-cert\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.711916 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-utilities" (OuterVolumeSpecName: "utilities") pod "fa834975-c760-4bcb-b0ee-e2f79ade8bd8" (UID: "fa834975-c760-4bcb-b0ee-e2f79ade8bd8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.712808 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-client-ca\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.713101 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-client-ca\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.714344 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-proxy-ca-bundles\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: 
\"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.714360 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-config\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.714685 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-config\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.718813 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-kube-api-access-zd5b4" (OuterVolumeSpecName: "kube-api-access-zd5b4") pod "fa834975-c760-4bcb-b0ee-e2f79ade8bd8" (UID: "fa834975-c760-4bcb-b0ee-e2f79ade8bd8"). InnerVolumeSpecName "kube-api-access-zd5b4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.720177 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abcd5d16-268c-47f2-af61-c06b081b624f-serving-cert\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.727444 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348c6d16-dd12-4eb6-af84-5171192435ae-serving-cert\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.731167 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cs9b\" (UniqueName: \"kubernetes.io/projected/348c6d16-dd12-4eb6-af84-5171192435ae-kube-api-access-5cs9b\") pod \"route-controller-manager-c8c4b84fc-27fvf\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.741066 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsxgk\" (UniqueName: \"kubernetes.io/projected/abcd5d16-268c-47f2-af61-c06b081b624f-kube-api-access-wsxgk\") pod \"controller-manager-56cb99bbcf-rlj4d\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.751493 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-catalog-content" (OuterVolumeSpecName: 
"catalog-content") pod "fa834975-c760-4bcb-b0ee-e2f79ade8bd8" (UID: "fa834975-c760-4bcb-b0ee-e2f79ade8bd8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.766002 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.777504 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.813681 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd5b4\" (UniqueName: \"kubernetes.io/projected/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-kube-api-access-zd5b4\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.814154 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.814172 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fa834975-c760-4bcb-b0ee-e2f79ade8bd8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.980684 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lctvc"] Jan 21 10:40:57 crc kubenswrapper[4745]: I0121 10:40:57.990112 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lctvc"] Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.011938 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b85ba9f-f076-4546-86d7-1fa02a52e80c" 
path="/var/lib/kubelet/pods/5b85ba9f-f076-4546-86d7-1fa02a52e80c/volumes" Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.012820 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7f663f7-35ba-4c54-a326-27891aeb51e4" path="/var/lib/kubelet/pods/b7f663f7-35ba-4c54-a326-27891aeb51e4/volumes" Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.013325 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" path="/var/lib/kubelet/pods/fa834975-c760-4bcb-b0ee-e2f79ade8bd8/volumes" Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.022193 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d"] Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.066686 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf"] Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.135995 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.661007 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" event={"ID":"bba28065-a564-430c-ac41-309ebc0089a3","Type":"ContainerStarted","Data":"15c86341c48573559963274237fa5aec1f5fe3a34eab0bca57f2eb0ab9d3428b"} Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.661992 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.664137 4745 patch_prober.go:28] interesting pod/oauth-openshift-574b75df8-8wd29 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.58:6443/healthz\": dial tcp 10.217.0.58:6443: connect: 
connection refused" start-of-body= Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.664234 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" podUID="bba28065-a564-430c-ac41-309ebc0089a3" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.58:6443/healthz\": dial tcp 10.217.0.58:6443: connect: connection refused" Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.666666 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" event={"ID":"abcd5d16-268c-47f2-af61-c06b081b624f","Type":"ContainerStarted","Data":"4630804f52ca775061db6a0d9aa9978dbbbb1aac28c22775faf3f38e9348a55f"} Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.666744 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" event={"ID":"abcd5d16-268c-47f2-af61-c06b081b624f","Type":"ContainerStarted","Data":"3c7d590f088f63323928b443c02023ae162e84934d365258d7335c2ec2046c4b"} Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.667271 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.669990 4745 patch_prober.go:28] interesting pod/controller-manager-56cb99bbcf-rlj4d container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" start-of-body= Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.670074 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" podUID="abcd5d16-268c-47f2-af61-c06b081b624f" containerName="controller-manager" probeResult="failure" output="Get 
\"https://10.217.0.61:8443/healthz\": dial tcp 10.217.0.61:8443: connect: connection refused" Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.672246 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" event={"ID":"348c6d16-dd12-4eb6-af84-5171192435ae","Type":"ContainerStarted","Data":"cf59ca459e79ee309620ca87dec6bfa1d766d19947de73250b93d23ce3903631"} Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.672323 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.672337 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" event={"ID":"348c6d16-dd12-4eb6-af84-5171192435ae","Type":"ContainerStarted","Data":"fae9e6856c226ce3ba888cd7800300f64ac41032bbb520fda453564aa767aa80"} Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.676588 4745 patch_prober.go:28] interesting pod/route-controller-manager-c8c4b84fc-27fvf container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.676669 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" podUID="348c6d16-dd12-4eb6-af84-5171192435ae" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.689127 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" podStartSLOduration=30.689104231 podStartE2EDuration="30.689104231s" podCreationTimestamp="2026-01-21 10:40:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:40:58.688788899 +0000 UTC m=+243.149576497" watchObservedRunningTime="2026-01-21 10:40:58.689104231 +0000 UTC m=+243.149891829" Jan 21 10:40:58 crc kubenswrapper[4745]: I0121 10:40:58.716125 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" podStartSLOduration=3.716098937 podStartE2EDuration="3.716098937s" podCreationTimestamp="2026-01-21 10:40:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:40:58.713196508 +0000 UTC m=+243.173984116" watchObservedRunningTime="2026-01-21 10:40:58.716098937 +0000 UTC m=+243.176886535" Jan 21 10:40:59 crc kubenswrapper[4745]: I0121 10:40:59.690394 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:40:59 crc kubenswrapper[4745]: I0121 10:40:59.690997 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-574b75df8-8wd29" Jan 21 10:40:59 crc kubenswrapper[4745]: I0121 10:40:59.691609 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:40:59 crc kubenswrapper[4745]: I0121 10:40:59.712258 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" podStartSLOduration=4.712227408 podStartE2EDuration="4.712227408s" podCreationTimestamp="2026-01-21 10:40:55 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:40:58.746242942 +0000 UTC m=+243.207030540" watchObservedRunningTime="2026-01-21 10:40:59.712227408 +0000 UTC m=+244.173015006" Jan 21 10:41:01 crc kubenswrapper[4745]: I0121 10:41:01.174210 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:41:01 crc kubenswrapper[4745]: I0121 10:41:01.174292 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:41:01 crc kubenswrapper[4745]: I0121 10:41:01.214211 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:41:01 crc kubenswrapper[4745]: I0121 10:41:01.743035 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.441912 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-fn62p"] Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.442943 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-fn62p" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" containerName="registry-server" containerID="cri-o://f49e3a4d6172f3f5d53c6744942d36dd66c7439cd0986f940c625c6fdd8152eb" gracePeriod=2 Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.717378 4745 generic.go:334] "Generic (PLEG): container finished" podID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" containerID="f49e3a4d6172f3f5d53c6744942d36dd66c7439cd0986f940c625c6fdd8152eb" exitCode=0 Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.717439 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn62p" 
event={"ID":"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28","Type":"ContainerDied","Data":"f49e3a4d6172f3f5d53c6744942d36dd66c7439cd0986f940c625c6fdd8152eb"} Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.756060 4745 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 10:41:04 crc kubenswrapper[4745]: E0121 10:41:04.756455 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" containerName="extract-content" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.756471 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" containerName="extract-content" Jan 21 10:41:04 crc kubenswrapper[4745]: E0121 10:41:04.756505 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" containerName="extract-utilities" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.756514 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" containerName="extract-utilities" Jan 21 10:41:04 crc kubenswrapper[4745]: E0121 10:41:04.756540 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" containerName="registry-server" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.756548 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" containerName="registry-server" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.756669 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa834975-c760-4bcb-b0ee-e2f79ade8bd8" containerName="registry-server" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.757216 4745 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.757587 4745 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646" gracePeriod=15 Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.757836 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e" gracePeriod=15 Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.757884 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885" gracePeriod=15 Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.757928 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e" gracePeriod=15 Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.757946 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.757972 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff" gracePeriod=15 Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.758478 4745 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 10:41:04 crc kubenswrapper[4745]: E0121 10:41:04.761896 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.761932 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 10:41:04 crc kubenswrapper[4745]: E0121 10:41:04.761950 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.761959 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 10:41:04 crc kubenswrapper[4745]: E0121 10:41:04.761971 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.761979 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 10:41:04 crc kubenswrapper[4745]: E0121 10:41:04.761988 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.761995 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 10:41:04 crc kubenswrapper[4745]: E0121 10:41:04.762009 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.762016 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 21 10:41:04 crc kubenswrapper[4745]: E0121 10:41:04.762026 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.762034 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.762192 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.762207 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.762216 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.762230 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 21 10:41:04 crc kubenswrapper[4745]: 
I0121 10:41:04.762244 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.768672 4745 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.919644 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.920113 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.920149 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.920180 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.920222 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.920244 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.920269 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:04 crc kubenswrapper[4745]: I0121 10:41:04.920305 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.002075 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.003383 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022259 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022332 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022353 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022380 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022450 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022472 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022468 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022552 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022487 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 
10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022607 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022636 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022659 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022698 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022661 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022775 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.022807 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.123910 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrwxr\" (UniqueName: \"kubernetes.io/projected/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-kube-api-access-hrwxr\") pod \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.124020 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-utilities\") pod \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.124050 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-catalog-content\") pod \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\" (UID: \"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28\") " Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.125832 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-utilities" (OuterVolumeSpecName: "utilities") pod 
"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" (UID: "84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.132264 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-kube-api-access-hrwxr" (OuterVolumeSpecName: "kube-api-access-hrwxr") pod "84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" (UID: "84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28"). InnerVolumeSpecName "kube-api-access-hrwxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.225622 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hrwxr\" (UniqueName: \"kubernetes.io/projected/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-kube-api-access-hrwxr\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.225933 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.251993 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" (UID: "84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.327261 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.726933 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.728257 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e" exitCode=0 Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.728289 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885" exitCode=0 Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.728299 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e" exitCode=0 Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.728305 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff" exitCode=2 Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.730672 4745 generic.go:334] "Generic (PLEG): container finished" podID="04bbf215-722d-4e3d-bc35-99fd1f673a02" containerID="554d4e7d11b67e7c5320d439bba4063afc80aa6782fd46c36a5e506f8332dbf0" exitCode=0 Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.730745 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"04bbf215-722d-4e3d-bc35-99fd1f673a02","Type":"ContainerDied","Data":"554d4e7d11b67e7c5320d439bba4063afc80aa6782fd46c36a5e506f8332dbf0"} Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.731519 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.731951 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.733669 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-fn62p" event={"ID":"84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28","Type":"ContainerDied","Data":"2bd2319c39d0f933d64027ab90bea8ada9c5595c010c7f192397f2e7c1c05f11"} Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.733692 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-fn62p" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.733881 4745 scope.go:117] "RemoveContainer" containerID="f49e3a4d6172f3f5d53c6744942d36dd66c7439cd0986f940c625c6fdd8152eb" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.734674 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.735203 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.752487 4745 scope.go:117] "RemoveContainer" containerID="b9495423924e84574bcb461777225dcd4c4051d88c357c75687e746daac2df81" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.754196 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:05 crc kubenswrapper[4745]: I0121 10:41:05.754646 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:05 
crc kubenswrapper[4745]: I0121 10:41:05.788557 4745 scope.go:117] "RemoveContainer" containerID="9d6642a8dfcd3b69281c538b151667b9f17f1809db62befb9e347a554708cfa5" Jan 21 10:41:06 crc kubenswrapper[4745]: I0121 10:41:06.003665 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:06 crc kubenswrapper[4745]: I0121 10:41:06.004580 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:06 crc kubenswrapper[4745]: E0121 10:41:06.947244 4745 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:06 crc kubenswrapper[4745]: E0121 10:41:06.948141 4745 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:06 crc kubenswrapper[4745]: E0121 10:41:06.948359 4745 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:06 crc kubenswrapper[4745]: E0121 10:41:06.948578 4745 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:06 crc kubenswrapper[4745]: E0121 10:41:06.948995 4745 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:06 crc kubenswrapper[4745]: I0121 10:41:06.949137 4745 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 21 10:41:06 crc kubenswrapper[4745]: E0121 10:41:06.949575 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="200ms" Jan 21 10:41:07 crc kubenswrapper[4745]: E0121 10:41:07.150937 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="400ms" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.345279 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.346429 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.346843 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.356644 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.357582 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.358246 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.358510 4745 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.358818 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.474329 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-var-lock\") pod \"04bbf215-722d-4e3d-bc35-99fd1f673a02\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.474449 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-var-lock" (OuterVolumeSpecName: "var-lock") pod "04bbf215-722d-4e3d-bc35-99fd1f673a02" (UID: "04bbf215-722d-4e3d-bc35-99fd1f673a02"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.474515 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04bbf215-722d-4e3d-bc35-99fd1f673a02-kube-api-access\") pod \"04bbf215-722d-4e3d-bc35-99fd1f673a02\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.474701 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-kubelet-dir\") pod \"04bbf215-722d-4e3d-bc35-99fd1f673a02\" (UID: \"04bbf215-722d-4e3d-bc35-99fd1f673a02\") " Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.474750 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.474795 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.474786 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "04bbf215-722d-4e3d-bc35-99fd1f673a02" (UID: "04bbf215-722d-4e3d-bc35-99fd1f673a02"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.474828 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.474873 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.474937 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.475052 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.475570 4745 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.475597 4745 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04bbf215-722d-4e3d-bc35-99fd1f673a02-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.475612 4745 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.475629 4745 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.475650 4745 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.482744 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04bbf215-722d-4e3d-bc35-99fd1f673a02-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "04bbf215-722d-4e3d-bc35-99fd1f673a02" (UID: "04bbf215-722d-4e3d-bc35-99fd1f673a02"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:41:07 crc kubenswrapper[4745]: E0121 10:41:07.552789 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="800ms" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.576931 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/04bbf215-722d-4e3d-bc35-99fd1f673a02-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.755287 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.756413 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646" exitCode=0 Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.756599 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.756645 4745 scope.go:117] "RemoveContainer" containerID="d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.759074 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"04bbf215-722d-4e3d-bc35-99fd1f673a02","Type":"ContainerDied","Data":"9ab2ea856d67f260892a3459129a338f7ba50193ff33e4e52884553410cd300a"} Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.759115 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ab2ea856d67f260892a3459129a338f7ba50193ff33e4e52884553410cd300a" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.759173 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.777894 4745 scope.go:117] "RemoveContainer" containerID="338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.783282 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.783653 4745 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.783842 
4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.786979 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.787215 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.787368 4745 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.795369 4745 scope.go:117] "RemoveContainer" containerID="814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.811047 4745 scope.go:117] "RemoveContainer" containerID="0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.829124 4745 scope.go:117] "RemoveContainer" 
containerID="19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.852140 4745 scope.go:117] "RemoveContainer" containerID="f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.889325 4745 scope.go:117] "RemoveContainer" containerID="d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e" Jan 21 10:41:07 crc kubenswrapper[4745]: E0121 10:41:07.889889 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\": container with ID starting with d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e not found: ID does not exist" containerID="d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.889928 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e"} err="failed to get container status \"d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\": rpc error: code = NotFound desc = could not find container \"d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e\": container with ID starting with d07123b8b12ecec8a008bb279519ae7d9537a5531e9c2bff7b3c90f54e00634e not found: ID does not exist" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.889953 4745 scope.go:117] "RemoveContainer" containerID="338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885" Jan 21 10:41:07 crc kubenswrapper[4745]: E0121 10:41:07.890325 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\": container with ID starting with 
338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885 not found: ID does not exist" containerID="338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.890354 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885"} err="failed to get container status \"338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\": rpc error: code = NotFound desc = could not find container \"338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885\": container with ID starting with 338ab25508c8dc7608fab016fd35315355e3a24f0d942849229e0a58702ea885 not found: ID does not exist" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.890373 4745 scope.go:117] "RemoveContainer" containerID="814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e" Jan 21 10:41:07 crc kubenswrapper[4745]: E0121 10:41:07.890759 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\": container with ID starting with 814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e not found: ID does not exist" containerID="814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.890813 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e"} err="failed to get container status \"814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\": rpc error: code = NotFound desc = could not find container \"814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e\": container with ID starting with 814a0d1d27a4b576fabddf87c059305e7f994debb4ffdbbf20f53d2e3d9bb33e not found: ID does not 
exist" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.890863 4745 scope.go:117] "RemoveContainer" containerID="0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff" Jan 21 10:41:07 crc kubenswrapper[4745]: E0121 10:41:07.891255 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\": container with ID starting with 0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff not found: ID does not exist" containerID="0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.891290 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff"} err="failed to get container status \"0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\": rpc error: code = NotFound desc = could not find container \"0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff\": container with ID starting with 0870a6bcabc228d76d4484199f5ad2071c393fff0f092288838f573b655727ff not found: ID does not exist" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.891313 4745 scope.go:117] "RemoveContainer" containerID="19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646" Jan 21 10:41:07 crc kubenswrapper[4745]: E0121 10:41:07.891905 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\": container with ID starting with 19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646 not found: ID does not exist" containerID="19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.891994 4745 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646"} err="failed to get container status \"19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\": rpc error: code = NotFound desc = could not find container \"19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646\": container with ID starting with 19a393a96c6ca34a1271d312d1b1196d61107e0157bcee3fc1c5d1c3ca79f646 not found: ID does not exist" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.892039 4745 scope.go:117] "RemoveContainer" containerID="f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5" Jan 21 10:41:07 crc kubenswrapper[4745]: E0121 10:41:07.892450 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\": container with ID starting with f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5 not found: ID does not exist" containerID="f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5" Jan 21 10:41:07 crc kubenswrapper[4745]: I0121 10:41:07.892480 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5"} err="failed to get container status \"f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\": rpc error: code = NotFound desc = could not find container \"f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5\": container with ID starting with f9574d02ca75adba64c0c794b18e12ad830b4922b70db8a89ca93e724b7b1da5 not found: ID does not exist" Jan 21 10:41:08 crc kubenswrapper[4745]: I0121 10:41:08.009017 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 21 10:41:08 crc 
kubenswrapper[4745]: E0121 10:41:08.353788 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="1.6s" Jan 21 10:41:09 crc kubenswrapper[4745]: E0121 10:41:09.808839 4745 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.78:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:09 crc kubenswrapper[4745]: I0121 10:41:09.809921 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:09 crc kubenswrapper[4745]: W0121 10:41:09.854009 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-f4abd48eafe6311c1587098a770cfd41414d80e825d7f5096db95660c299c2b4 WatchSource:0}: Error finding container f4abd48eafe6311c1587098a770cfd41414d80e825d7f5096db95660c299c2b4: Status 404 returned error can't find the container with id f4abd48eafe6311c1587098a770cfd41414d80e825d7f5096db95660c299c2b4 Jan 21 10:41:09 crc kubenswrapper[4745]: E0121 10:41:09.857869 4745 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.78:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cb8ea3e49f38d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 10:41:09.856654221 +0000 UTC m=+254.317441829,LastTimestamp:2026-01-21 10:41:09.856654221 +0000 UTC m=+254.317441829,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 10:41:09 crc kubenswrapper[4745]: E0121 10:41:09.954647 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="3.2s" Jan 21 10:41:10 crc kubenswrapper[4745]: I0121 10:41:10.779337 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69"} Jan 21 10:41:10 crc kubenswrapper[4745]: I0121 10:41:10.779777 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"f4abd48eafe6311c1587098a770cfd41414d80e825d7f5096db95660c299c2b4"} Jan 21 10:41:10 crc kubenswrapper[4745]: E0121 10:41:10.780353 4745 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 
38.129.56.78:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:10 crc kubenswrapper[4745]: I0121 10:41:10.780816 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:10 crc kubenswrapper[4745]: I0121 10:41:10.781144 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:12 crc kubenswrapper[4745]: E0121 10:41:12.471367 4745 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.78:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188cb8ea3e49f38d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-21 10:41:09.856654221 +0000 UTC m=+254.317441829,LastTimestamp:2026-01-21 10:41:09.856654221 +0000 UTC m=+254.317441829,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 21 10:41:13 crc kubenswrapper[4745]: E0121 10:41:13.156384 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="6.4s" Jan 21 10:41:14 crc kubenswrapper[4745]: E0121 10:41:14.080061 4745 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.129.56.78:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" volumeName="registry-storage" Jan 21 10:41:16 crc kubenswrapper[4745]: I0121 10:41:16.003005 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:16 crc kubenswrapper[4745]: I0121 10:41:16.003919 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:18 crc kubenswrapper[4745]: I0121 10:41:18.833854 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 
21 10:41:18 crc kubenswrapper[4745]: I0121 10:41:18.834305 4745 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a" exitCode=1 Jan 21 10:41:18 crc kubenswrapper[4745]: I0121 10:41:18.834351 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a"} Jan 21 10:41:18 crc kubenswrapper[4745]: I0121 10:41:18.835306 4745 scope.go:117] "RemoveContainer" containerID="8fd4e48027d6104298fc4552f608dd603a4faeccc3808f8f58348cfdcf4e7d3a" Jan 21 10:41:18 crc kubenswrapper[4745]: I0121 10:41:18.836291 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:18 crc kubenswrapper[4745]: I0121 10:41:18.836785 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:18 crc kubenswrapper[4745]: I0121 10:41:18.837955 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 
10:41:18.999835 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.001755 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.002245 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.002709 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.026439 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.026466 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:19 crc kubenswrapper[4745]: E0121 10:41:19.026927 4745 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.129.56.78:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.027565 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:19 crc kubenswrapper[4745]: W0121 10:41:19.048811 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-b6cac1c75516e837153964d7817dba32999d6bb9f9d5a25bcd8d6b750d0c606b WatchSource:0}: Error finding container b6cac1c75516e837153964d7817dba32999d6bb9f9d5a25bcd8d6b750d0c606b: Status 404 returned error can't find the container with id b6cac1c75516e837153964d7817dba32999d6bb9f9d5a25bcd8d6b750d0c606b Jan 21 10:41:19 crc kubenswrapper[4745]: E0121 10:41:19.557729 4745 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.78:6443: connect: connection refused" interval="7s" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.845499 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.845621 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d341421be138e4b3f8246c0d66a413cbf545ab4c1cf8789f3e44431daf909ff8"} Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.846732 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.846997 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.847261 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.847945 4745 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="27d46b8e9ca62837919c836434455b020979383c33dda1c7d230f1863a0a6396" exitCode=0 Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.847984 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"27d46b8e9ca62837919c836434455b020979383c33dda1c7d230f1863a0a6396"} Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.848030 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b6cac1c75516e837153964d7817dba32999d6bb9f9d5a25bcd8d6b750d0c606b"} Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.848326 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.848348 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.848977 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:19 crc kubenswrapper[4745]: E0121 10:41:19.849006 4745 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.849394 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.849959 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:19 crc kubenswrapper[4745]: I0121 10:41:19.896723 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 
21 10:41:20 crc kubenswrapper[4745]: I0121 10:41:20.200722 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:41:20 crc kubenswrapper[4745]: I0121 10:41:20.205084 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:41:20 crc kubenswrapper[4745]: I0121 10:41:20.205940 4745 status_manager.go:851] "Failed to get status for pod" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" pod="openshift-marketplace/redhat-operators-fn62p" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-fn62p\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:20 crc kubenswrapper[4745]: I0121 10:41:20.206383 4745 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:20 crc kubenswrapper[4745]: I0121 10:41:20.206806 4745 status_manager.go:851] "Failed to get status for pod" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.78:6443: connect: connection refused" Jan 21 10:41:20 crc kubenswrapper[4745]: I0121 10:41:20.859492 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"827cb9f1c002ab2da1611394c860e744ed68035ccf35aa922b2f234c68720701"} Jan 21 10:41:20 crc kubenswrapper[4745]: I0121 10:41:20.859590 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"505eff87f9c03c48e3f2322bc36a3e002493a9dd4df9b2fa800ab9dc499bc2fc"} Jan 21 10:41:20 crc kubenswrapper[4745]: I0121 10:41:20.859611 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"05b124d56d6209d25de0b0f8ebd2e70e440d61ddd83fe1c053c0c645672c4233"} Jan 21 10:41:20 crc kubenswrapper[4745]: I0121 10:41:20.859631 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f00fd559ef486eacbab3df1d470dd07f86d32acbe509343409f8e1ba1e4cd579"} Jan 21 10:41:26 crc kubenswrapper[4745]: I0121 10:41:26.907358 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5efae7afedbda7ec2ff42851c153d322b532863fd3dc8ebfb90088733359bb93"} Jan 21 10:41:26 crc kubenswrapper[4745]: I0121 10:41:26.907866 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:26 crc kubenswrapper[4745]: I0121 10:41:26.907726 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:26 crc kubenswrapper[4745]: I0121 10:41:26.907888 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:26 crc kubenswrapper[4745]: I0121 10:41:26.913944 4745 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:26 crc 
kubenswrapper[4745]: I0121 10:41:26.916828 4745 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d268138a-75e7-4ae9-ac80-560aef3f4ea4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f00fd559ef486eacbab3df1d470dd07f86d32acbe509343409f8e1ba1e4cd579\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://505eff87f9c03c48e3f2322bc36a3e002493a9dd4df9b2fa800ab9dc499bc2fc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/c
rcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05b124d56d6209d25de0b0f8ebd2e70e440d61ddd83fe1c053c0c645672c4233\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:41:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5efae7afedbda7ec2ff42851c153d322b532863fd3dc8ebfb90088733359bb93\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:41:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://827cb9f1c002ab2da1611394c860e744ed68035ccf35aa922b2f234c68720701\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-21T10:41:20Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods \"kube-apiserver-crc\" not found" Jan 21 10:41:27 crc kubenswrapper[4745]: I0121 10:41:27.913172 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:27 crc kubenswrapper[4745]: I0121 10:41:27.913212 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:29 crc kubenswrapper[4745]: I0121 10:41:29.028782 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:29 crc kubenswrapper[4745]: I0121 10:41:29.028851 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:29 crc kubenswrapper[4745]: I0121 10:41:29.029333 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:29 crc kubenswrapper[4745]: I0121 10:41:29.029358 4745 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:29 crc kubenswrapper[4745]: I0121 10:41:29.033408 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:29 crc kubenswrapper[4745]: I0121 10:41:29.035977 4745 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6b1f9771-aea5-4db8-8938-4d8f12dbb8b2" Jan 21 10:41:29 crc kubenswrapper[4745]: I0121 10:41:29.901808 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 21 10:41:29 crc kubenswrapper[4745]: I0121 10:41:29.935441 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:29 crc kubenswrapper[4745]: I0121 10:41:29.935485 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:34 crc kubenswrapper[4745]: I0121 10:41:34.033169 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:34 crc kubenswrapper[4745]: I0121 10:41:34.034136 4745 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:34 crc kubenswrapper[4745]: I0121 10:41:34.034160 4745 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="d268138a-75e7-4ae9-ac80-560aef3f4ea4" Jan 21 10:41:36 crc kubenswrapper[4745]: I0121 10:41:36.018241 4745 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" 
pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="6b1f9771-aea5-4db8-8938-4d8f12dbb8b2" Jan 21 10:41:36 crc kubenswrapper[4745]: I0121 10:41:36.041189 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 21 10:41:36 crc kubenswrapper[4745]: I0121 10:41:36.460349 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 21 10:41:36 crc kubenswrapper[4745]: I0121 10:41:36.579626 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 10:41:36 crc kubenswrapper[4745]: I0121 10:41:36.784980 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 21 10:41:36 crc kubenswrapper[4745]: I0121 10:41:36.963873 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 21 10:41:37 crc kubenswrapper[4745]: I0121 10:41:37.591837 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 21 10:41:38 crc kubenswrapper[4745]: I0121 10:41:38.143364 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 21 10:41:38 crc kubenswrapper[4745]: I0121 10:41:38.222827 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 21 10:41:38 crc kubenswrapper[4745]: I0121 10:41:38.483447 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 21 10:41:38 crc kubenswrapper[4745]: I0121 10:41:38.723407 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 21 10:41:39 crc kubenswrapper[4745]: 
I0121 10:41:39.076743 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.085064 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.239444 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.314135 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.382780 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.464982 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.521979 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.618251 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.635609 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.697981 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.769105 4745 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.841055 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.866194 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 21 10:41:39 crc kubenswrapper[4745]: I0121 10:41:39.929720 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.065166 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.173553 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.206843 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.210328 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.287951 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.289005 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.293191 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 21 10:41:40 crc 
kubenswrapper[4745]: I0121 10:41:40.333470 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.374300 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.437922 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.438236 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.458438 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.604105 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.838774 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.844870 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.869306 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.875622 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.884274 4745 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"machine-api-operator-tls" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.960283 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.979057 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 21 10:41:40 crc kubenswrapper[4745]: I0121 10:41:40.994398 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.018493 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.024962 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.090592 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.191767 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.267782 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.339674 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.345579 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.417614 4745 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.489663 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.572728 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.580591 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.590654 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.612354 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.628674 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.688920 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.761913 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.765434 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.888078 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 
21 10:41:41 crc kubenswrapper[4745]: I0121 10:41:41.916980 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:41:42 crc kubenswrapper[4745]: I0121 10:41:42.015641 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 21 10:41:42 crc kubenswrapper[4745]: I0121 10:41:42.069093 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 21 10:41:42 crc kubenswrapper[4745]: I0121 10:41:42.171600 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 21 10:41:42 crc kubenswrapper[4745]: I0121 10:41:42.258969 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 21 10:41:42 crc kubenswrapper[4745]: I0121 10:41:42.270174 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 21 10:41:42 crc kubenswrapper[4745]: I0121 10:41:42.348707 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 21 10:41:42 crc kubenswrapper[4745]: I0121 10:41:42.363414 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 21 10:41:42 crc kubenswrapper[4745]: I0121 10:41:42.447627 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 21 10:41:42 crc kubenswrapper[4745]: I0121 10:41:42.511605 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 21 10:41:42 crc kubenswrapper[4745]: I0121 10:41:42.524372 4745 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 21 10:41:42 crc kubenswrapper[4745]: I0121 10:41:42.626170 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.044565 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.052112 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.114619 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.118887 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.132626 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.145279 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.148657 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.168305 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.206901 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.301041 4745 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.315769 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.399956 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.468373 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.473499 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.515318 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.546584 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.603196 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.729284 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.736743 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.743275 4745 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.759242 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.787624 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.888456 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.937863 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.943377 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.978900 4745 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 21 10:41:43 crc kubenswrapper[4745]: I0121 10:41:43.994974 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.013706 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.043351 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.044320 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 21 
10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.219375 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.268338 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.272399 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.325059 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.424315 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.505241 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.506629 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.684946 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.712472 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.739760 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.802661 4745 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.866871 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 21 10:41:44 crc kubenswrapper[4745]: I0121 10:41:44.994244 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.051561 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.070038 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.098194 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.112516 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.175407 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.322505 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.323114 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.451274 4745 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"etcd-client" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.517783 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.526062 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.530970 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.611290 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.675113 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.748442 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.872670 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 21 10:41:45 crc kubenswrapper[4745]: I0121 10:41:45.898148 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.035296 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.047229 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.048173 4745 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.072861 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.236046 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.255369 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.300885 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.317844 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.396657 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.401768 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.415846 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.419851 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.484361 4745 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.569944 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.572672 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.612498 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.622079 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.624948 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.662478 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.694042 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.787512 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.892635 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 21 10:41:46 crc kubenswrapper[4745]: I0121 10:41:46.947357 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.026013 4745 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-operator"/"iptables-alerter-script" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.168008 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.174993 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.181890 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.198852 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.352831 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.355969 4745 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.358114 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.370178 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.406401 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.458520 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.467833 4745 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.555135 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.598080 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.639847 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.789999 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.844413 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 21 10:41:47 crc kubenswrapper[4745]: I0121 10:41:47.887984 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.061749 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.069277 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.082026 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.160224 4745 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.165752 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-fn62p","openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.165844 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.171291 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.188021 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.188000586 podStartE2EDuration="22.188000586s" podCreationTimestamp="2026-01-21 10:41:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:41:48.187996885 +0000 UTC m=+292.648784483" watchObservedRunningTime="2026-01-21 10:41:48.188000586 +0000 UTC m=+292.648788184" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.245828 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.269193 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.409219 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.413826 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.490636 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:41:48 crc 
kubenswrapper[4745]: I0121 10:41:48.622409 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.627166 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.638436 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.713312 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.734143 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 21 10:41:48 crc kubenswrapper[4745]: I0121 10:41:48.792559 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.077490 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.083067 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.127801 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.138010 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.205507 4745 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.278848 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.330505 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.347876 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.415221 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.458630 4745 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.459006 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69" gracePeriod=5 Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.488124 4745 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.543642 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.579485 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 
10:41:49.669921 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.799400 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.877144 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.887933 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.909092 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.937584 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.951545 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.954980 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 21 10:41:49 crc kubenswrapper[4745]: I0121 10:41:49.956519 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.006349 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.013171 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" 
path="/var/lib/kubelet/pods/84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28/volumes" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.054927 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.056708 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.121953 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.128351 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.218067 4745 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.396455 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.417484 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.426196 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.571944 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.636811 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 21 10:41:50 crc 
kubenswrapper[4745]: I0121 10:41:50.642962 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.658024 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.708612 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.730984 4745 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.809238 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.883149 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.907783 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 21 10:41:50 crc kubenswrapper[4745]: I0121 10:41:50.980140 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 21 10:41:51 crc kubenswrapper[4745]: I0121 10:41:51.061705 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 21 10:41:51 crc kubenswrapper[4745]: I0121 10:41:51.083753 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 21 10:41:51 crc kubenswrapper[4745]: I0121 10:41:51.139490 4745 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"kube-root-ca.crt" Jan 21 10:41:51 crc kubenswrapper[4745]: I0121 10:41:51.163776 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 21 10:41:51 crc kubenswrapper[4745]: I0121 10:41:51.164857 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 10:41:51 crc kubenswrapper[4745]: I0121 10:41:51.445166 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 21 10:41:51 crc kubenswrapper[4745]: I0121 10:41:51.477357 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 21 10:41:51 crc kubenswrapper[4745]: I0121 10:41:51.608855 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 21 10:41:51 crc kubenswrapper[4745]: I0121 10:41:51.994664 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 21 10:41:52 crc kubenswrapper[4745]: I0121 10:41:52.341095 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 21 10:41:52 crc kubenswrapper[4745]: I0121 10:41:52.431495 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 21 10:41:52 crc kubenswrapper[4745]: I0121 10:41:52.575276 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 10:41:52 crc kubenswrapper[4745]: I0121 10:41:52.626357 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 21 10:41:53 crc kubenswrapper[4745]: I0121 10:41:53.016387 4745 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 21 10:41:53 crc kubenswrapper[4745]: I0121 10:41:53.209910 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 21 10:41:53 crc kubenswrapper[4745]: I0121 10:41:53.277722 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 21 10:41:53 crc kubenswrapper[4745]: I0121 10:41:53.491119 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 21 10:41:53 crc kubenswrapper[4745]: I0121 10:41:53.512917 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.011775 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.261218 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.588199 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.588292 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776010 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776223 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776293 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776331 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776368 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776375 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: 
"resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776448 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776471 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776502 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776883 4745 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776906 4745 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776919 4745 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.776931 4745 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.787923 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.877773 4745 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:54 crc kubenswrapper[4745]: I0121 10:41:54.995061 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.102399 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.102457 4745 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69" exitCode=137 Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.102518 4745 scope.go:117] "RemoveContainer" containerID="2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69" Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.102692 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.125332 4745 scope.go:117] "RemoveContainer" containerID="2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69" Jan 21 10:41:55 crc kubenswrapper[4745]: E0121 10:41:55.125908 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69\": container with ID starting with 2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69 not found: ID does not exist" containerID="2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69" Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.125949 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69"} err="failed to get container status \"2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69\": rpc error: code = NotFound desc = could not find container \"2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69\": container with ID starting with 2ac855d68c69094f3009257fbf5b9ce0e3030eded01a9ea010e433c471635e69 not found: ID does not exist" Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.247413 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.779658 4745 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.830925 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf"] Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.831264 4745 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" podUID="348c6d16-dd12-4eb6-af84-5171192435ae" containerName="route-controller-manager" containerID="cri-o://cf59ca459e79ee309620ca87dec6bfa1d766d19947de73250b93d23ce3903631" gracePeriod=30 Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.834053 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d"] Jan 21 10:41:55 crc kubenswrapper[4745]: I0121 10:41:55.834312 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" podUID="abcd5d16-268c-47f2-af61-c06b081b624f" containerName="controller-manager" containerID="cri-o://4630804f52ca775061db6a0d9aa9978dbbbb1aac28c22775faf3f38e9348a55f" gracePeriod=30 Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.022396 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.115999 4745 generic.go:334] "Generic (PLEG): container finished" podID="348c6d16-dd12-4eb6-af84-5171192435ae" containerID="cf59ca459e79ee309620ca87dec6bfa1d766d19947de73250b93d23ce3903631" exitCode=0 Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.116070 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" event={"ID":"348c6d16-dd12-4eb6-af84-5171192435ae","Type":"ContainerDied","Data":"cf59ca459e79ee309620ca87dec6bfa1d766d19947de73250b93d23ce3903631"} Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.123077 4745 generic.go:334] "Generic (PLEG): container finished" podID="abcd5d16-268c-47f2-af61-c06b081b624f" containerID="4630804f52ca775061db6a0d9aa9978dbbbb1aac28c22775faf3f38e9348a55f" exitCode=0 Jan 
21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.123401 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" event={"ID":"abcd5d16-268c-47f2-af61-c06b081b624f","Type":"ContainerDied","Data":"4630804f52ca775061db6a0d9aa9978dbbbb1aac28c22775faf3f38e9348a55f"} Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.441404 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.504839 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.540443 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-config\") pod \"abcd5d16-268c-47f2-af61-c06b081b624f\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.540500 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-config\") pod \"348c6d16-dd12-4eb6-af84-5171192435ae\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.540613 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348c6d16-dd12-4eb6-af84-5171192435ae-serving-cert\") pod \"348c6d16-dd12-4eb6-af84-5171192435ae\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.540655 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cs9b\" (UniqueName: 
\"kubernetes.io/projected/348c6d16-dd12-4eb6-af84-5171192435ae-kube-api-access-5cs9b\") pod \"348c6d16-dd12-4eb6-af84-5171192435ae\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.540695 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-client-ca\") pod \"abcd5d16-268c-47f2-af61-c06b081b624f\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.540738 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-client-ca\") pod \"348c6d16-dd12-4eb6-af84-5171192435ae\" (UID: \"348c6d16-dd12-4eb6-af84-5171192435ae\") " Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.540784 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsxgk\" (UniqueName: \"kubernetes.io/projected/abcd5d16-268c-47f2-af61-c06b081b624f-kube-api-access-wsxgk\") pod \"abcd5d16-268c-47f2-af61-c06b081b624f\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.540825 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abcd5d16-268c-47f2-af61-c06b081b624f-serving-cert\") pod \"abcd5d16-268c-47f2-af61-c06b081b624f\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.540856 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-proxy-ca-bundles\") pod \"abcd5d16-268c-47f2-af61-c06b081b624f\" (UID: \"abcd5d16-268c-47f2-af61-c06b081b624f\") " Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 
10:41:56.541869 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-config" (OuterVolumeSpecName: "config") pod "348c6d16-dd12-4eb6-af84-5171192435ae" (UID: "348c6d16-dd12-4eb6-af84-5171192435ae"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.542624 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-client-ca" (OuterVolumeSpecName: "client-ca") pod "abcd5d16-268c-47f2-af61-c06b081b624f" (UID: "abcd5d16-268c-47f2-af61-c06b081b624f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.542842 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-config" (OuterVolumeSpecName: "config") pod "abcd5d16-268c-47f2-af61-c06b081b624f" (UID: "abcd5d16-268c-47f2-af61-c06b081b624f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.543197 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-client-ca" (OuterVolumeSpecName: "client-ca") pod "348c6d16-dd12-4eb6-af84-5171192435ae" (UID: "348c6d16-dd12-4eb6-af84-5171192435ae"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.543513 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "abcd5d16-268c-47f2-af61-c06b081b624f" (UID: "abcd5d16-268c-47f2-af61-c06b081b624f"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.548150 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348c6d16-dd12-4eb6-af84-5171192435ae-kube-api-access-5cs9b" (OuterVolumeSpecName: "kube-api-access-5cs9b") pod "348c6d16-dd12-4eb6-af84-5171192435ae" (UID: "348c6d16-dd12-4eb6-af84-5171192435ae"). InnerVolumeSpecName "kube-api-access-5cs9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.548858 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abcd5d16-268c-47f2-af61-c06b081b624f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "abcd5d16-268c-47f2-af61-c06b081b624f" (UID: "abcd5d16-268c-47f2-af61-c06b081b624f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.552876 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abcd5d16-268c-47f2-af61-c06b081b624f-kube-api-access-wsxgk" (OuterVolumeSpecName: "kube-api-access-wsxgk") pod "abcd5d16-268c-47f2-af61-c06b081b624f" (UID: "abcd5d16-268c-47f2-af61-c06b081b624f"). InnerVolumeSpecName "kube-api-access-wsxgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.557462 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/348c6d16-dd12-4eb6-af84-5171192435ae-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "348c6d16-dd12-4eb6-af84-5171192435ae" (UID: "348c6d16-dd12-4eb6-af84-5171192435ae"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.642863 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.642918 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.642931 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.642940 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348c6d16-dd12-4eb6-af84-5171192435ae-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.642953 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cs9b\" (UniqueName: \"kubernetes.io/projected/348c6d16-dd12-4eb6-af84-5171192435ae-kube-api-access-5cs9b\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.642969 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/abcd5d16-268c-47f2-af61-c06b081b624f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.642980 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/348c6d16-dd12-4eb6-af84-5171192435ae-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.642991 4745 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-wsxgk\" (UniqueName: \"kubernetes.io/projected/abcd5d16-268c-47f2-af61-c06b081b624f-kube-api-access-wsxgk\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:56 crc kubenswrapper[4745]: I0121 10:41:56.642999 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abcd5d16-268c-47f2-af61-c06b081b624f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.136869 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" event={"ID":"abcd5d16-268c-47f2-af61-c06b081b624f","Type":"ContainerDied","Data":"3c7d590f088f63323928b443c02023ae162e84934d365258d7335c2ec2046c4b"} Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.138679 4745 scope.go:117] "RemoveContainer" containerID="4630804f52ca775061db6a0d9aa9978dbbbb1aac28c22775faf3f38e9348a55f" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.136899 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.139867 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" event={"ID":"348c6d16-dd12-4eb6-af84-5171192435ae","Type":"ContainerDied","Data":"fae9e6856c226ce3ba888cd7800300f64ac41032bbb520fda453564aa767aa80"} Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.139947 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.161033 4745 scope.go:117] "RemoveContainer" containerID="cf59ca459e79ee309620ca87dec6bfa1d766d19947de73250b93d23ce3903631" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.175267 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf"] Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.191462 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c8c4b84fc-27fvf"] Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.197155 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d"] Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.202894 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56cb99bbcf-rlj4d"] Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.475737 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f967c9988-sffxv"] Jan 21 10:41:57 crc kubenswrapper[4745]: E0121 10:41:57.476754 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" containerName="registry-server" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.476882 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" containerName="registry-server" Jan 21 10:41:57 crc kubenswrapper[4745]: E0121 10:41:57.477000 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abcd5d16-268c-47f2-af61-c06b081b624f" containerName="controller-manager" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.477102 4745 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="abcd5d16-268c-47f2-af61-c06b081b624f" containerName="controller-manager" Jan 21 10:41:57 crc kubenswrapper[4745]: E0121 10:41:57.477206 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.477329 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 10:41:57 crc kubenswrapper[4745]: E0121 10:41:57.477431 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" containerName="installer" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.477632 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" containerName="installer" Jan 21 10:41:57 crc kubenswrapper[4745]: E0121 10:41:57.477762 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" containerName="extract-content" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.477875 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" containerName="extract-content" Jan 21 10:41:57 crc kubenswrapper[4745]: E0121 10:41:57.477956 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348c6d16-dd12-4eb6-af84-5171192435ae" containerName="route-controller-manager" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.478089 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="348c6d16-dd12-4eb6-af84-5171192435ae" containerName="route-controller-manager" Jan 21 10:41:57 crc kubenswrapper[4745]: E0121 10:41:57.478191 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" containerName="extract-utilities" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.478304 4745 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" containerName="extract-utilities" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.478551 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04bbf215-722d-4e3d-bc35-99fd1f673a02" containerName="installer" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.478687 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.478772 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="84794ebf-6b4d-4f16-ab1c-b5bbf5c02e28" containerName="registry-server" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.478864 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="348c6d16-dd12-4eb6-af84-5171192435ae" containerName="route-controller-manager" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.478963 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="abcd5d16-268c-47f2-af61-c06b081b624f" containerName="controller-manager" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.479740 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br"] Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.480577 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.481498 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.489413 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.489503 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.491358 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.494416 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.494517 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.494847 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.494856 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.494868 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.495356 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.495400 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:41:57 crc 
kubenswrapper[4745]: I0121 10:41:57.503377 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.503376 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.505909 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.511188 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f967c9988-sffxv"] Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.516375 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br"] Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.556299 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-client-ca\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.556749 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-serving-cert\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.556939 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-client-ca\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.557070 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-config\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.557214 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-proxy-ca-bundles\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.557362 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxdmc\" (UniqueName: \"kubernetes.io/projected/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-kube-api-access-wxdmc\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.557593 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-config\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " 
pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.557698 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgvhp\" (UniqueName: \"kubernetes.io/projected/f218df5a-ab7b-492e-aec3-64567013e2d2-kube-api-access-xgvhp\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.557758 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f218df5a-ab7b-492e-aec3-64567013e2d2-serving-cert\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.659363 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-serving-cert\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.659445 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-config\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.659474 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-client-ca\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.659507 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-proxy-ca-bundles\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.659549 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxdmc\" (UniqueName: \"kubernetes.io/projected/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-kube-api-access-wxdmc\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.659575 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-config\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.659595 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgvhp\" (UniqueName: \"kubernetes.io/projected/f218df5a-ab7b-492e-aec3-64567013e2d2-kube-api-access-xgvhp\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 
10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.659613 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f218df5a-ab7b-492e-aec3-64567013e2d2-serving-cert\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.659645 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-client-ca\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.660872 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-client-ca\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.661589 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-proxy-ca-bundles\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.661715 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-config\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " 
pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.661833 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-client-ca\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.662564 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-config\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.670289 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f218df5a-ab7b-492e-aec3-64567013e2d2-serving-cert\") pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.670822 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-serving-cert\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.685431 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgvhp\" (UniqueName: \"kubernetes.io/projected/f218df5a-ab7b-492e-aec3-64567013e2d2-kube-api-access-xgvhp\") 
pod \"controller-manager-7f967c9988-sffxv\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.693027 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wxdmc\" (UniqueName: \"kubernetes.io/projected/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-kube-api-access-wxdmc\") pod \"route-controller-manager-8f6c6688d-jm9br\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.807576 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:41:57 crc kubenswrapper[4745]: I0121 10:41:57.817785 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:41:58 crc kubenswrapper[4745]: I0121 10:41:58.009021 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="348c6d16-dd12-4eb6-af84-5171192435ae" path="/var/lib/kubelet/pods/348c6d16-dd12-4eb6-af84-5171192435ae/volumes" Jan 21 10:41:58 crc kubenswrapper[4745]: I0121 10:41:58.009747 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abcd5d16-268c-47f2-af61-c06b081b624f" path="/var/lib/kubelet/pods/abcd5d16-268c-47f2-af61-c06b081b624f/volumes" Jan 21 10:42:00 crc kubenswrapper[4745]: E0121 10:42:00.752595 4745 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 21 10:42:00 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager_ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7_0(51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649): error adding pod 
openshift-route-controller-manager_route-controller-manager-8f6c6688d-jm9br to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649" Netns:"/var/run/netns/74e640b6-a023-4b1b-9227-46c87074f7c6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-8f6c6688d-jm9br;K8S_POD_INFRA_CONTAINER_ID=51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649;K8S_POD_UID=ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br] networking: Multus: [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-8f6c6688d-jm9br in out of cluster comm: pod "route-controller-manager-8f6c6688d-jm9br" not found Jan 21 10:42:00 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:00 crc kubenswrapper[4745]: > Jan 21 10:42:00 crc kubenswrapper[4745]: E0121 10:42:00.753092 4745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 21 10:42:00 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager_ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7_0(51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649): error adding pod openshift-route-controller-manager_route-controller-manager-8f6c6688d-jm9br to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649" Netns:"/var/run/netns/74e640b6-a023-4b1b-9227-46c87074f7c6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-8f6c6688d-jm9br;K8S_POD_INFRA_CONTAINER_ID=51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649;K8S_POD_UID=ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br] networking: Multus: [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-8f6c6688d-jm9br in out of cluster comm: pod "route-controller-manager-8f6c6688d-jm9br" not found Jan 21 10:42:00 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:00 crc kubenswrapper[4745]: > pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:00 crc kubenswrapper[4745]: E0121 10:42:00.753124 4745 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err=< Jan 21 10:42:00 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager_ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7_0(51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649): error adding pod openshift-route-controller-manager_route-controller-manager-8f6c6688d-jm9br to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649" Netns:"/var/run/netns/74e640b6-a023-4b1b-9227-46c87074f7c6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-8f6c6688d-jm9br;K8S_POD_INFRA_CONTAINER_ID=51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649;K8S_POD_UID=ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br] networking: Multus: [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-8f6c6688d-jm9br in out of cluster comm: pod "route-controller-manager-8f6c6688d-jm9br" not found Jan 21 10:42:00 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:00 crc kubenswrapper[4745]: > 
pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:00 crc kubenswrapper[4745]: E0121 10:42:00.753268 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager(ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager(ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager_ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7_0(51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649): error adding pod openshift-route-controller-manager_route-controller-manager-8f6c6688d-jm9br to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649\\\" Netns:\\\"/var/run/netns/74e640b6-a023-4b1b-9227-46c87074f7c6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-8f6c6688d-jm9br;K8S_POD_INFRA_CONTAINER_ID=51b463e8072d44f449eabb6dfd8771a5fbc5a75014cb95fe9623ba1b18fd2649;K8S_POD_UID=ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br] networking: Multus: [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-8f6c6688d-jm9br in out of cluster comm: pod 
\\\"route-controller-manager-8f6c6688d-jm9br\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" podUID="ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" Jan 21 10:42:00 crc kubenswrapper[4745]: E0121 10:42:00.860500 4745 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 21 10:42:00 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f967c9988-sffxv_openshift-controller-manager_f218df5a-ab7b-492e-aec3-64567013e2d2_0(2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000): error adding pod openshift-controller-manager_controller-manager-7f967c9988-sffxv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000" Netns:"/var/run/netns/2c67c423-b19a-49d7-a90d-0ae80e634257" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f967c9988-sffxv;K8S_POD_INFRA_CONTAINER_ID=2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000;K8S_POD_UID=f218df5a-ab7b-492e-aec3-64567013e2d2" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f967c9988-sffxv] networking: Multus: [openshift-controller-manager/controller-manager-7f967c9988-sffxv/f218df5a-ab7b-492e-aec3-64567013e2d2]: error setting the 
networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-7f967c9988-sffxv in out of cluster comm: pod "controller-manager-7f967c9988-sffxv" not found Jan 21 10:42:00 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:00 crc kubenswrapper[4745]: > Jan 21 10:42:00 crc kubenswrapper[4745]: E0121 10:42:00.860660 4745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 21 10:42:00 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f967c9988-sffxv_openshift-controller-manager_f218df5a-ab7b-492e-aec3-64567013e2d2_0(2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000): error adding pod openshift-controller-manager_controller-manager-7f967c9988-sffxv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000" Netns:"/var/run/netns/2c67c423-b19a-49d7-a90d-0ae80e634257" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f967c9988-sffxv;K8S_POD_INFRA_CONTAINER_ID=2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000;K8S_POD_UID=f218df5a-ab7b-492e-aec3-64567013e2d2" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f967c9988-sffxv] networking: Multus: [openshift-controller-manager/controller-manager-7f967c9988-sffxv/f218df5a-ab7b-492e-aec3-64567013e2d2]: error setting the 
networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-7f967c9988-sffxv in out of cluster comm: pod "controller-manager-7f967c9988-sffxv" not found Jan 21 10:42:00 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:00 crc kubenswrapper[4745]: > pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:42:00 crc kubenswrapper[4745]: E0121 10:42:00.860687 4745 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 21 10:42:00 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f967c9988-sffxv_openshift-controller-manager_f218df5a-ab7b-492e-aec3-64567013e2d2_0(2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000): error adding pod openshift-controller-manager_controller-manager-7f967c9988-sffxv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000" Netns:"/var/run/netns/2c67c423-b19a-49d7-a90d-0ae80e634257" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f967c9988-sffxv;K8S_POD_INFRA_CONTAINER_ID=2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000;K8S_POD_UID=f218df5a-ab7b-492e-aec3-64567013e2d2" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f967c9988-sffxv] networking: Multus: 
[openshift-controller-manager/controller-manager-7f967c9988-sffxv/f218df5a-ab7b-492e-aec3-64567013e2d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-7f967c9988-sffxv in out of cluster comm: pod "controller-manager-7f967c9988-sffxv" not found Jan 21 10:42:00 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:00 crc kubenswrapper[4745]: > pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:42:00 crc kubenswrapper[4745]: E0121 10:42:00.860792 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-7f967c9988-sffxv_openshift-controller-manager(f218df5a-ab7b-492e-aec3-64567013e2d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-7f967c9988-sffxv_openshift-controller-manager(f218df5a-ab7b-492e-aec3-64567013e2d2)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f967c9988-sffxv_openshift-controller-manager_f218df5a-ab7b-492e-aec3-64567013e2d2_0(2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000): error adding pod openshift-controller-manager_controller-manager-7f967c9988-sffxv to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000\\\" Netns:\\\"/var/run/netns/2c67c423-b19a-49d7-a90d-0ae80e634257\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f967c9988-sffxv;K8S_POD_INFRA_CONTAINER_ID=2e3eb08b32ebd7fd1835901051f9f3f6db9aa13098af6b82a63e02e4049ab000;K8S_POD_UID=f218df5a-ab7b-492e-aec3-64567013e2d2\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f967c9988-sffxv] networking: Multus: [openshift-controller-manager/controller-manager-7f967c9988-sffxv/f218df5a-ab7b-492e-aec3-64567013e2d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-7f967c9988-sffxv in out of cluster comm: pod \\\"controller-manager-7f967c9988-sffxv\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" podUID="f218df5a-ab7b-492e-aec3-64567013e2d2" Jan 21 10:42:01 crc kubenswrapper[4745]: I0121 10:42:01.166286 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:01 crc kubenswrapper[4745]: I0121 10:42:01.166364 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:42:01 crc kubenswrapper[4745]: I0121 10:42:01.166893 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:01 crc kubenswrapper[4745]: I0121 10:42:01.166899 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:42:04 crc kubenswrapper[4745]: E0121 10:42:04.311203 4745 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 21 10:42:04 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager_ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7_0(33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780): error adding pod openshift-route-controller-manager_route-controller-manager-8f6c6688d-jm9br to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780" Netns:"/var/run/netns/5f973bb6-01fe-436b-8bba-562e15580d60" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-8f6c6688d-jm9br;K8S_POD_INFRA_CONTAINER_ID=33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780;K8S_POD_UID=ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br] networking: Multus: [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-8f6c6688d-jm9br in out of cluster comm: pod "route-controller-manager-8f6c6688d-jm9br" not found Jan 21 10:42:04 crc kubenswrapper[4745]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:04 crc kubenswrapper[4745]: > Jan 21 10:42:04 crc kubenswrapper[4745]: E0121 10:42:04.311854 4745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 21 10:42:04 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager_ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7_0(33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780): error adding pod openshift-route-controller-manager_route-controller-manager-8f6c6688d-jm9br to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780" Netns:"/var/run/netns/5f973bb6-01fe-436b-8bba-562e15580d60" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-8f6c6688d-jm9br;K8S_POD_INFRA_CONTAINER_ID=33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780;K8S_POD_UID=ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br] networking: Multus: [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-8f6c6688d-jm9br in out of cluster comm: pod "route-controller-manager-8f6c6688d-jm9br" not 
found Jan 21 10:42:04 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:04 crc kubenswrapper[4745]: > pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:04 crc kubenswrapper[4745]: E0121 10:42:04.311877 4745 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 21 10:42:04 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager_ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7_0(33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780): error adding pod openshift-route-controller-manager_route-controller-manager-8f6c6688d-jm9br to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780" Netns:"/var/run/netns/5f973bb6-01fe-436b-8bba-562e15580d60" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-8f6c6688d-jm9br;K8S_POD_INFRA_CONTAINER_ID=33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780;K8S_POD_UID=ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br] networking: Multus: [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7]: error setting the networks status, pod was already deleted: 
SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-8f6c6688d-jm9br in out of cluster comm: pod "route-controller-manager-8f6c6688d-jm9br" not found Jan 21 10:42:04 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:04 crc kubenswrapper[4745]: > pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:04 crc kubenswrapper[4745]: E0121 10:42:04.311973 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager(ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager(ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager_ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7_0(33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780): error adding pod openshift-route-controller-manager_route-controller-manager-8f6c6688d-jm9br to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780\\\" Netns:\\\"/var/run/netns/5f973bb6-01fe-436b-8bba-562e15580d60\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-8f6c6688d-jm9br;K8S_POD_INFRA_CONTAINER_ID=33fbe1d793e5f24d6e7befb9a4d8ad55a3a7c262f3343a8279bcac5d67304780;K8S_POD_UID=ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br] networking: Multus: [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-8f6c6688d-jm9br in out of cluster comm: pod \\\"route-controller-manager-8f6c6688d-jm9br\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" podUID="ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" Jan 21 10:42:04 crc kubenswrapper[4745]: E0121 10:42:04.361808 4745 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 21 10:42:04 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f967c9988-sffxv_openshift-controller-manager_f218df5a-ab7b-492e-aec3-64567013e2d2_0(2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2): error adding pod openshift-controller-manager_controller-manager-7f967c9988-sffxv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed 
(add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2" Netns:"/var/run/netns/e188fc56-7f67-419a-91a1-6cab08f4cdc6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f967c9988-sffxv;K8S_POD_INFRA_CONTAINER_ID=2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2;K8S_POD_UID=f218df5a-ab7b-492e-aec3-64567013e2d2" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f967c9988-sffxv] networking: Multus: [openshift-controller-manager/controller-manager-7f967c9988-sffxv/f218df5a-ab7b-492e-aec3-64567013e2d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-7f967c9988-sffxv in out of cluster comm: pod "controller-manager-7f967c9988-sffxv" not found Jan 21 10:42:04 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:04 crc kubenswrapper[4745]: > Jan 21 10:42:04 crc kubenswrapper[4745]: E0121 10:42:04.361928 4745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 21 10:42:04 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f967c9988-sffxv_openshift-controller-manager_f218df5a-ab7b-492e-aec3-64567013e2d2_0(2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2): error adding pod openshift-controller-manager_controller-manager-7f967c9988-sffxv to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed 
(add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2" Netns:"/var/run/netns/e188fc56-7f67-419a-91a1-6cab08f4cdc6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f967c9988-sffxv;K8S_POD_INFRA_CONTAINER_ID=2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2;K8S_POD_UID=f218df5a-ab7b-492e-aec3-64567013e2d2" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f967c9988-sffxv] networking: Multus: [openshift-controller-manager/controller-manager-7f967c9988-sffxv/f218df5a-ab7b-492e-aec3-64567013e2d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-7f967c9988-sffxv in out of cluster comm: pod "controller-manager-7f967c9988-sffxv" not found Jan 21 10:42:04 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:04 crc kubenswrapper[4745]: > pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:42:04 crc kubenswrapper[4745]: E0121 10:42:04.362028 4745 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 21 10:42:04 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f967c9988-sffxv_openshift-controller-manager_f218df5a-ab7b-492e-aec3-64567013e2d2_0(2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2): error adding pod openshift-controller-manager_controller-manager-7f967c9988-sffxv to CNI network 
"multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2" Netns:"/var/run/netns/e188fc56-7f67-419a-91a1-6cab08f4cdc6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f967c9988-sffxv;K8S_POD_INFRA_CONTAINER_ID=2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2;K8S_POD_UID=f218df5a-ab7b-492e-aec3-64567013e2d2" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f967c9988-sffxv] networking: Multus: [openshift-controller-manager/controller-manager-7f967c9988-sffxv/f218df5a-ab7b-492e-aec3-64567013e2d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-7f967c9988-sffxv in out of cluster comm: pod "controller-manager-7f967c9988-sffxv" not found Jan 21 10:42:04 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:04 crc kubenswrapper[4745]: > pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:42:04 crc kubenswrapper[4745]: E0121 10:42:04.362149 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-7f967c9988-sffxv_openshift-controller-manager(f218df5a-ab7b-492e-aec3-64567013e2d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-7f967c9988-sffxv_openshift-controller-manager(f218df5a-ab7b-492e-aec3-64567013e2d2)\\\": rpc error: 
code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f967c9988-sffxv_openshift-controller-manager_f218df5a-ab7b-492e-aec3-64567013e2d2_0(2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2): error adding pod openshift-controller-manager_controller-manager-7f967c9988-sffxv to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2\\\" Netns:\\\"/var/run/netns/e188fc56-7f67-419a-91a1-6cab08f4cdc6\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f967c9988-sffxv;K8S_POD_INFRA_CONTAINER_ID=2d1a8a4ca19b0b2bfe78b3610f17342d66f5920447d4c42b4a6fb36d34189fe2;K8S_POD_UID=f218df5a-ab7b-492e-aec3-64567013e2d2\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f967c9988-sffxv] networking: Multus: [openshift-controller-manager/controller-manager-7f967c9988-sffxv/f218df5a-ab7b-492e-aec3-64567013e2d2]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-7f967c9988-sffxv in out of cluster comm: pod \\\"controller-manager-7f967c9988-sffxv\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" 
podUID="f218df5a-ab7b-492e-aec3-64567013e2d2" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.000223 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.001897 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.616144 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f967c9988-sffxv"] Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.617208 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.632693 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.641195 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br"] Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.763321 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgvhp\" (UniqueName: \"kubernetes.io/projected/f218df5a-ab7b-492e-aec3-64567013e2d2-kube-api-access-xgvhp\") pod \"f218df5a-ab7b-492e-aec3-64567013e2d2\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.763514 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-proxy-ca-bundles\") pod \"f218df5a-ab7b-492e-aec3-64567013e2d2\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.763652 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-client-ca\") pod \"f218df5a-ab7b-492e-aec3-64567013e2d2\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.763709 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f218df5a-ab7b-492e-aec3-64567013e2d2-serving-cert\") pod \"f218df5a-ab7b-492e-aec3-64567013e2d2\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.763753 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-config\") pod 
\"f218df5a-ab7b-492e-aec3-64567013e2d2\" (UID: \"f218df5a-ab7b-492e-aec3-64567013e2d2\") " Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.763949 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f218df5a-ab7b-492e-aec3-64567013e2d2" (UID: "f218df5a-ab7b-492e-aec3-64567013e2d2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.764121 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.764217 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-client-ca" (OuterVolumeSpecName: "client-ca") pod "f218df5a-ab7b-492e-aec3-64567013e2d2" (UID: "f218df5a-ab7b-492e-aec3-64567013e2d2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.764902 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-config" (OuterVolumeSpecName: "config") pod "f218df5a-ab7b-492e-aec3-64567013e2d2" (UID: "f218df5a-ab7b-492e-aec3-64567013e2d2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.769402 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f218df5a-ab7b-492e-aec3-64567013e2d2-kube-api-access-xgvhp" (OuterVolumeSpecName: "kube-api-access-xgvhp") pod "f218df5a-ab7b-492e-aec3-64567013e2d2" (UID: "f218df5a-ab7b-492e-aec3-64567013e2d2"). InnerVolumeSpecName "kube-api-access-xgvhp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.769750 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f218df5a-ab7b-492e-aec3-64567013e2d2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f218df5a-ab7b-492e-aec3-64567013e2d2" (UID: "f218df5a-ab7b-492e-aec3-64567013e2d2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.865727 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.865850 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f218df5a-ab7b-492e-aec3-64567013e2d2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.865864 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f218df5a-ab7b-492e-aec3-64567013e2d2-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:42:15 crc kubenswrapper[4745]: I0121 10:42:15.865877 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgvhp\" (UniqueName: \"kubernetes.io/projected/f218df5a-ab7b-492e-aec3-64567013e2d2-kube-api-access-xgvhp\") on node \"crc\" DevicePath 
\"\"" Jan 21 10:42:16 crc kubenswrapper[4745]: I0121 10:42:16.257288 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f967c9988-sffxv" Jan 21 10:42:16 crc kubenswrapper[4745]: I0121 10:42:16.295039 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f967c9988-sffxv"] Jan 21 10:42:16 crc kubenswrapper[4745]: I0121 10:42:16.300355 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f967c9988-sffxv"] Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.490044 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-zbxjq"] Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.491684 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.498886 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.498997 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.499096 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.499124 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.499508 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.502736 4745 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.504118 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.504458 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-zbxjq"] Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.595441 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-proxy-ca-bundles\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.595504 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9rvc\" (UniqueName: \"kubernetes.io/projected/a9b1a913-7711-478b-89e3-df8371ea5012-kube-api-access-p9rvc\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.595661 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-config\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.595747 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a9b1a913-7711-478b-89e3-df8371ea5012-serving-cert\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:17 crc kubenswrapper[4745]: I0121 10:42:17.595815 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-client-ca\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc kubenswrapper[4745]: I0121 10:42:17.697753 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-config\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc kubenswrapper[4745]: I0121 10:42:17.697838 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9b1a913-7711-478b-89e3-df8371ea5012-serving-cert\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc kubenswrapper[4745]: I0121 10:42:17.697872 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-client-ca\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc kubenswrapper[4745]: I0121 10:42:17.697906 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p9rvc\" (UniqueName: \"kubernetes.io/projected/a9b1a913-7711-478b-89e3-df8371ea5012-kube-api-access-p9rvc\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc kubenswrapper[4745]: I0121 10:42:17.697927 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-proxy-ca-bundles\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc kubenswrapper[4745]: I0121 10:42:17.699511 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-proxy-ca-bundles\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc kubenswrapper[4745]: I0121 10:42:17.699631 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-config\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc kubenswrapper[4745]: I0121 10:42:17.700152 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-client-ca\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc 
kubenswrapper[4745]: I0121 10:42:17.705277 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9b1a913-7711-478b-89e3-df8371ea5012-serving-cert\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc kubenswrapper[4745]: I0121 10:42:17.717261 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9rvc\" (UniqueName: \"kubernetes.io/projected/a9b1a913-7711-478b-89e3-df8371ea5012-kube-api-access-p9rvc\") pod \"controller-manager-77d68bfdb-zbxjq\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc kubenswrapper[4745]: I0121 10:42:17.816949 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:18 crc kubenswrapper[4745]: I0121 10:42:18.012867 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f218df5a-ab7b-492e-aec3-64567013e2d2" path="/var/lib/kubelet/pods/f218df5a-ab7b-492e-aec3-64567013e2d2/volumes" Jan 21 10:42:18 crc kubenswrapper[4745]: E0121 10:42:18.863141 4745 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 21 10:42:18 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager_ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7_0(c3de6b332ff2157b51bccb6f0c3762abe47dd73cafefd76d01176c373c77ee50): error adding pod openshift-route-controller-manager_route-controller-manager-8f6c6688d-jm9br to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"c3de6b332ff2157b51bccb6f0c3762abe47dd73cafefd76d01176c373c77ee50" Netns:"/var/run/netns/c030af3b-7f70-40cd-b7d2-48a643394041" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-8f6c6688d-jm9br;K8S_POD_INFRA_CONTAINER_ID=c3de6b332ff2157b51bccb6f0c3762abe47dd73cafefd76d01176c373c77ee50;K8S_POD_UID=ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br] networking: Multus: [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-8f6c6688d-jm9br in out of cluster comm: pod "route-controller-manager-8f6c6688d-jm9br" not found Jan 21 10:42:18 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:18 crc kubenswrapper[4745]: > Jan 21 10:42:18 crc kubenswrapper[4745]: E0121 10:42:18.863236 4745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 21 10:42:18 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-8f6c6688d-jm9br_openshift-route-controller-manager_ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7_0(c3de6b332ff2157b51bccb6f0c3762abe47dd73cafefd76d01176c373c77ee50): error adding pod openshift-route-controller-manager_route-controller-manager-8f6c6688d-jm9br to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" 
failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c3de6b332ff2157b51bccb6f0c3762abe47dd73cafefd76d01176c373c77ee50" Netns:"/var/run/netns/c030af3b-7f70-40cd-b7d2-48a643394041" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-8f6c6688d-jm9br;K8S_POD_INFRA_CONTAINER_ID=c3de6b332ff2157b51bccb6f0c3762abe47dd73cafefd76d01176c373c77ee50;K8S_POD_UID=ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br] networking: Multus: [openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod route-controller-manager-8f6c6688d-jm9br in out of cluster comm: pod "route-controller-manager-8f6c6688d-jm9br" not found Jan 21 10:42:18 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:18 crc kubenswrapper[4745]: > pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.277908 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.289566 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.423364 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-config\") pod \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.423513 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-serving-cert\") pod \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.423576 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxdmc\" (UniqueName: \"kubernetes.io/projected/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-kube-api-access-wxdmc\") pod \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.423654 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-client-ca\") pod \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\" (UID: \"ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7\") " Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.424296 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-client-ca" (OuterVolumeSpecName: "client-ca") pod "ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" (UID: "ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.424421 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-config" (OuterVolumeSpecName: "config") pod "ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" (UID: "ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.429272 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" (UID: "ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.429279 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-kube-api-access-wxdmc" (OuterVolumeSpecName: "kube-api-access-wxdmc") pod "ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" (UID: "ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7"). InnerVolumeSpecName "kube-api-access-wxdmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.525213 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.525272 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxdmc\" (UniqueName: \"kubernetes.io/projected/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-kube-api-access-wxdmc\") on node \"crc\" DevicePath \"\"" Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.525286 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:42:19 crc kubenswrapper[4745]: I0121 10:42:19.525302 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.283606 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.334910 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs"] Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.335918 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.341084 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.341641 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.341666 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.341816 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.342032 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.342472 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.349789 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br"] Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.387960 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8f6c6688d-jm9br"] Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.390283 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs"] Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.442384 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-config\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.446795 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-client-ca\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.446885 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw8bl\" (UniqueName: \"kubernetes.io/projected/49d19254-1c54-4bc7-8501-329543bd9763-kube-api-access-fw8bl\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.446924 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49d19254-1c54-4bc7-8501-329543bd9763-serving-cert\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.549246 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-config\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " 
pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.549309 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-client-ca\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.549348 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw8bl\" (UniqueName: \"kubernetes.io/projected/49d19254-1c54-4bc7-8501-329543bd9763-kube-api-access-fw8bl\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.549379 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49d19254-1c54-4bc7-8501-329543bd9763-serving-cert\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.550395 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-client-ca\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.552584 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-config\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.554258 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49d19254-1c54-4bc7-8501-329543bd9763-serving-cert\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.570329 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw8bl\" (UniqueName: \"kubernetes.io/projected/49d19254-1c54-4bc7-8501-329543bd9763-kube-api-access-fw8bl\") pod \"route-controller-manager-66d9b996-cq2hs\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: I0121 10:42:20.675428 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:20 crc kubenswrapper[4745]: E0121 10:42:20.774437 4745 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 21 10:42:20 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-77d68bfdb-zbxjq_openshift-controller-manager_a9b1a913-7711-478b-89e3-df8371ea5012_0(f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7): error adding pod openshift-controller-manager_controller-manager-77d68bfdb-zbxjq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7" Netns:"/var/run/netns/03c688b4-0ef1-42ff-9623-487697d71359" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-77d68bfdb-zbxjq;K8S_POD_INFRA_CONTAINER_ID=f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7;K8S_POD_UID=a9b1a913-7711-478b-89e3-df8371ea5012" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-77d68bfdb-zbxjq] networking: Multus: [openshift-controller-manager/controller-manager-77d68bfdb-zbxjq/a9b1a913-7711-478b-89e3-df8371ea5012]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-77d68bfdb-zbxjq in out of cluster comm: pod "controller-manager-77d68bfdb-zbxjq" not found Jan 21 10:42:20 crc kubenswrapper[4745]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:20 crc kubenswrapper[4745]: > Jan 21 10:42:20 crc kubenswrapper[4745]: E0121 10:42:20.775107 4745 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 21 10:42:20 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-77d68bfdb-zbxjq_openshift-controller-manager_a9b1a913-7711-478b-89e3-df8371ea5012_0(f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7): error adding pod openshift-controller-manager_controller-manager-77d68bfdb-zbxjq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7" Netns:"/var/run/netns/03c688b4-0ef1-42ff-9623-487697d71359" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-77d68bfdb-zbxjq;K8S_POD_INFRA_CONTAINER_ID=f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7;K8S_POD_UID=a9b1a913-7711-478b-89e3-df8371ea5012" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-77d68bfdb-zbxjq] networking: Multus: [openshift-controller-manager/controller-manager-77d68bfdb-zbxjq/a9b1a913-7711-478b-89e3-df8371ea5012]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-77d68bfdb-zbxjq in out of cluster comm: pod "controller-manager-77d68bfdb-zbxjq" not found Jan 21 10:42:20 crc kubenswrapper[4745]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:20 crc kubenswrapper[4745]: > pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:20 crc kubenswrapper[4745]: E0121 10:42:20.775137 4745 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 21 10:42:20 crc kubenswrapper[4745]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-77d68bfdb-zbxjq_openshift-controller-manager_a9b1a913-7711-478b-89e3-df8371ea5012_0(f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7): error adding pod openshift-controller-manager_controller-manager-77d68bfdb-zbxjq to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7" Netns:"/var/run/netns/03c688b4-0ef1-42ff-9623-487697d71359" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-77d68bfdb-zbxjq;K8S_POD_INFRA_CONTAINER_ID=f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7;K8S_POD_UID=a9b1a913-7711-478b-89e3-df8371ea5012" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-77d68bfdb-zbxjq] networking: Multus: [openshift-controller-manager/controller-manager-77d68bfdb-zbxjq/a9b1a913-7711-478b-89e3-df8371ea5012]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-77d68bfdb-zbxjq in out of cluster comm: pod "controller-manager-77d68bfdb-zbxjq" not 
found Jan 21 10:42:20 crc kubenswrapper[4745]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 21 10:42:20 crc kubenswrapper[4745]: > pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:20 crc kubenswrapper[4745]: E0121 10:42:20.775225 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-77d68bfdb-zbxjq_openshift-controller-manager(a9b1a913-7711-478b-89e3-df8371ea5012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-77d68bfdb-zbxjq_openshift-controller-manager(a9b1a913-7711-478b-89e3-df8371ea5012)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-77d68bfdb-zbxjq_openshift-controller-manager_a9b1a913-7711-478b-89e3-df8371ea5012_0(f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7): error adding pod openshift-controller-manager_controller-manager-77d68bfdb-zbxjq to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7\\\" Netns:\\\"/var/run/netns/03c688b4-0ef1-42ff-9623-487697d71359\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-77d68bfdb-zbxjq;K8S_POD_INFRA_CONTAINER_ID=f5cd1bcdfa64e90fad5340d7b475930093a7bc8c12a6ae93d9621d9c57b667d7;K8S_POD_UID=a9b1a913-7711-478b-89e3-df8371ea5012\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-controller-manager/controller-manager-77d68bfdb-zbxjq] networking: Multus: [openshift-controller-manager/controller-manager-77d68bfdb-zbxjq/a9b1a913-7711-478b-89e3-df8371ea5012]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod controller-manager-77d68bfdb-zbxjq in out of cluster comm: pod \\\"controller-manager-77d68bfdb-zbxjq\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" podUID="a9b1a913-7711-478b-89e3-df8371ea5012" Jan 21 10:42:21 crc kubenswrapper[4745]: I0121 10:42:21.289727 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:21 crc kubenswrapper[4745]: I0121 10:42:21.290319 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:22 crc kubenswrapper[4745]: I0121 10:42:22.008389 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7" path="/var/lib/kubelet/pods/ab4191dc-1c8c-4cc2-a1ee-273eb109f5e7/volumes" Jan 21 10:42:22 crc kubenswrapper[4745]: I0121 10:42:22.202991 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-zbxjq"] Jan 21 10:42:22 crc kubenswrapper[4745]: W0121 10:42:22.211562 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9b1a913_7711_478b_89e3_df8371ea5012.slice/crio-9db3770c7f2378ece3022e654ec8032835024e6bfafae702441ee7b4cf48da7e WatchSource:0}: Error finding container 9db3770c7f2378ece3022e654ec8032835024e6bfafae702441ee7b4cf48da7e: Status 404 returned error can't find the container with id 9db3770c7f2378ece3022e654ec8032835024e6bfafae702441ee7b4cf48da7e Jan 21 10:42:22 crc kubenswrapper[4745]: I0121 10:42:22.261845 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs"] Jan 21 10:42:22 crc kubenswrapper[4745]: W0121 10:42:22.265698 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49d19254_1c54_4bc7_8501_329543bd9763.slice/crio-3dab45e7b37d9876fdcf5f93d6f52e97dab4c3ebc015e015e62bde34b684dcb5 WatchSource:0}: Error finding container 3dab45e7b37d9876fdcf5f93d6f52e97dab4c3ebc015e015e62bde34b684dcb5: Status 404 returned error can't find the container with id 3dab45e7b37d9876fdcf5f93d6f52e97dab4c3ebc015e015e62bde34b684dcb5 Jan 21 10:42:22 crc kubenswrapper[4745]: I0121 10:42:22.302746 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" event={"ID":"49d19254-1c54-4bc7-8501-329543bd9763","Type":"ContainerStarted","Data":"3dab45e7b37d9876fdcf5f93d6f52e97dab4c3ebc015e015e62bde34b684dcb5"} Jan 21 10:42:22 crc kubenswrapper[4745]: I0121 10:42:22.304691 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" event={"ID":"a9b1a913-7711-478b-89e3-df8371ea5012","Type":"ContainerStarted","Data":"9db3770c7f2378ece3022e654ec8032835024e6bfafae702441ee7b4cf48da7e"} Jan 21 10:42:23 crc kubenswrapper[4745]: I0121 10:42:23.313185 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" event={"ID":"49d19254-1c54-4bc7-8501-329543bd9763","Type":"ContainerStarted","Data":"2412fb798e54676e50cf0be0c5dcb626561a0dce0388f41a091b71575d8dd852"} Jan 21 10:42:23 crc kubenswrapper[4745]: I0121 10:42:23.313790 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:23 crc kubenswrapper[4745]: I0121 10:42:23.315863 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" event={"ID":"a9b1a913-7711-478b-89e3-df8371ea5012","Type":"ContainerStarted","Data":"f5f1cdaecaf484334296c5fc592b1866ca9b1825c0ec45a715ba085402e3be7d"} Jan 21 10:42:23 crc kubenswrapper[4745]: I0121 10:42:23.316075 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:23 crc kubenswrapper[4745]: I0121 10:42:23.321286 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:42:23 crc kubenswrapper[4745]: I0121 10:42:23.323196 4745 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:42:23 crc kubenswrapper[4745]: I0121 10:42:23.368827 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" podStartSLOduration=8.36878946 podStartE2EDuration="8.36878946s" podCreationTimestamp="2026-01-21 10:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:42:23.367972587 +0000 UTC m=+327.828760205" watchObservedRunningTime="2026-01-21 10:42:23.36878946 +0000 UTC m=+327.829577058" Jan 21 10:42:23 crc kubenswrapper[4745]: I0121 10:42:23.372733 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" podStartSLOduration=8.372710304 podStartE2EDuration="8.372710304s" podCreationTimestamp="2026-01-21 10:42:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:42:23.340344809 +0000 UTC m=+327.801132417" watchObservedRunningTime="2026-01-21 10:42:23.372710304 +0000 UTC m=+327.833497902" Jan 21 10:43:11 crc kubenswrapper[4745]: I0121 10:43:11.818313 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4c4t9"] Jan 21 10:43:11 crc kubenswrapper[4745]: I0121 10:43:11.819501 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:11 crc kubenswrapper[4745]: I0121 10:43:11.852657 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4c4t9"] Jan 21 10:43:11 crc kubenswrapper[4745]: I0121 10:43:11.998749 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/727683cc-8cb2-490b-952b-700fc7e633e7-registry-tls\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:11 crc kubenswrapper[4745]: I0121 10:43:11.998856 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:11 crc kubenswrapper[4745]: I0121 10:43:11.998984 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/727683cc-8cb2-490b-952b-700fc7e633e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:11 crc kubenswrapper[4745]: I0121 10:43:11.999097 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4flzq\" (UniqueName: \"kubernetes.io/projected/727683cc-8cb2-490b-952b-700fc7e633e7-kube-api-access-4flzq\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:11 crc kubenswrapper[4745]: I0121 10:43:11.999170 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/727683cc-8cb2-490b-952b-700fc7e633e7-trusted-ca\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:11 crc kubenswrapper[4745]: I0121 10:43:11.999224 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/727683cc-8cb2-490b-952b-700fc7e633e7-bound-sa-token\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:11 crc kubenswrapper[4745]: I0121 10:43:11.999272 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/727683cc-8cb2-490b-952b-700fc7e633e7-registry-certificates\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:11 crc kubenswrapper[4745]: I0121 10:43:11.999340 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/727683cc-8cb2-490b-952b-700fc7e633e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.032737 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.100702 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/727683cc-8cb2-490b-952b-700fc7e633e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.101097 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/727683cc-8cb2-490b-952b-700fc7e633e7-registry-tls\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.101243 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/727683cc-8cb2-490b-952b-700fc7e633e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.101371 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4flzq\" (UniqueName: \"kubernetes.io/projected/727683cc-8cb2-490b-952b-700fc7e633e7-kube-api-access-4flzq\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.101468 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/727683cc-8cb2-490b-952b-700fc7e633e7-trusted-ca\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.101541 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/727683cc-8cb2-490b-952b-700fc7e633e7-bound-sa-token\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.101587 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/727683cc-8cb2-490b-952b-700fc7e633e7-registry-certificates\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.103049 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/727683cc-8cb2-490b-952b-700fc7e633e7-trusted-ca\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.103343 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/727683cc-8cb2-490b-952b-700fc7e633e7-ca-trust-extracted\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc 
kubenswrapper[4745]: I0121 10:43:12.103517 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/727683cc-8cb2-490b-952b-700fc7e633e7-registry-certificates\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.110719 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/727683cc-8cb2-490b-952b-700fc7e633e7-registry-tls\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.110921 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/727683cc-8cb2-490b-952b-700fc7e633e7-installation-pull-secrets\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.131285 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/727683cc-8cb2-490b-952b-700fc7e633e7-bound-sa-token\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.134853 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4flzq\" (UniqueName: \"kubernetes.io/projected/727683cc-8cb2-490b-952b-700fc7e633e7-kube-api-access-4flzq\") pod \"image-registry-66df7c8f76-4c4t9\" (UID: \"727683cc-8cb2-490b-952b-700fc7e633e7\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.138456 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:12 crc kubenswrapper[4745]: I0121 10:43:12.610804 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-4c4t9"] Jan 21 10:43:13 crc kubenswrapper[4745]: I0121 10:43:13.633123 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" event={"ID":"727683cc-8cb2-490b-952b-700fc7e633e7","Type":"ContainerStarted","Data":"6e427cbe4a1815369544764b733462152fdf792a7c92a657aee5e9e08caffb92"} Jan 21 10:43:14 crc kubenswrapper[4745]: I0121 10:43:14.641194 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" event={"ID":"727683cc-8cb2-490b-952b-700fc7e633e7","Type":"ContainerStarted","Data":"a14085c602c5c609dbe1f01e55480a811be28762e9b4214b28cfc5149bde7533"} Jan 21 10:43:14 crc kubenswrapper[4745]: I0121 10:43:14.641358 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:14 crc kubenswrapper[4745]: I0121 10:43:14.669985 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" podStartSLOduration=3.669953678 podStartE2EDuration="3.669953678s" podCreationTimestamp="2026-01-21 10:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:43:14.664103509 +0000 UTC m=+379.124891107" watchObservedRunningTime="2026-01-21 10:43:14.669953678 +0000 UTC m=+379.130741276" Jan 21 10:43:15 crc kubenswrapper[4745]: I0121 10:43:15.623572 4745 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-zbxjq"] Jan 21 10:43:15 crc kubenswrapper[4745]: I0121 10:43:15.623879 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" podUID="a9b1a913-7711-478b-89e3-df8371ea5012" containerName="controller-manager" containerID="cri-o://f5f1cdaecaf484334296c5fc592b1866ca9b1825c0ec45a715ba085402e3be7d" gracePeriod=30 Jan 21 10:43:15 crc kubenswrapper[4745]: I0121 10:43:15.648965 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs"] Jan 21 10:43:15 crc kubenswrapper[4745]: I0121 10:43:15.649261 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" podUID="49d19254-1c54-4bc7-8501-329543bd9763" containerName="route-controller-manager" containerID="cri-o://2412fb798e54676e50cf0be0c5dcb626561a0dce0388f41a091b71575d8dd852" gracePeriod=30 Jan 21 10:43:15 crc kubenswrapper[4745]: I0121 10:43:15.866690 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:43:15 crc kubenswrapper[4745]: I0121 10:43:15.867264 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:43:16 crc kubenswrapper[4745]: I0121 10:43:16.655424 4745 generic.go:334] "Generic (PLEG): container finished" podID="a9b1a913-7711-478b-89e3-df8371ea5012" 
containerID="f5f1cdaecaf484334296c5fc592b1866ca9b1825c0ec45a715ba085402e3be7d" exitCode=0 Jan 21 10:43:16 crc kubenswrapper[4745]: I0121 10:43:16.655517 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" event={"ID":"a9b1a913-7711-478b-89e3-df8371ea5012","Type":"ContainerDied","Data":"f5f1cdaecaf484334296c5fc592b1866ca9b1825c0ec45a715ba085402e3be7d"} Jan 21 10:43:16 crc kubenswrapper[4745]: I0121 10:43:16.658388 4745 generic.go:334] "Generic (PLEG): container finished" podID="49d19254-1c54-4bc7-8501-329543bd9763" containerID="2412fb798e54676e50cf0be0c5dcb626561a0dce0388f41a091b71575d8dd852" exitCode=0 Jan 21 10:43:16 crc kubenswrapper[4745]: I0121 10:43:16.658463 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" event={"ID":"49d19254-1c54-4bc7-8501-329543bd9763","Type":"ContainerDied","Data":"2412fb798e54676e50cf0be0c5dcb626561a0dce0388f41a091b71575d8dd852"} Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.340024 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.345824 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.378421 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f967c9988-47fdd"] Jan 21 10:43:17 crc kubenswrapper[4745]: E0121 10:43:17.378675 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49d19254-1c54-4bc7-8501-329543bd9763" containerName="route-controller-manager" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.378691 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="49d19254-1c54-4bc7-8501-329543bd9763" containerName="route-controller-manager" Jan 21 10:43:17 crc kubenswrapper[4745]: E0121 10:43:17.378709 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b1a913-7711-478b-89e3-df8371ea5012" containerName="controller-manager" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.378716 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b1a913-7711-478b-89e3-df8371ea5012" containerName="controller-manager" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.378807 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="49d19254-1c54-4bc7-8501-329543bd9763" containerName="route-controller-manager" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.378826 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b1a913-7711-478b-89e3-df8371ea5012" containerName="controller-manager" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.379191 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.397724 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f967c9988-47fdd"] Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.506843 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-config\") pod \"49d19254-1c54-4bc7-8501-329543bd9763\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.506932 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-client-ca\") pod \"49d19254-1c54-4bc7-8501-329543bd9763\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.506980 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-proxy-ca-bundles\") pod \"a9b1a913-7711-478b-89e3-df8371ea5012\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.507006 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9rvc\" (UniqueName: \"kubernetes.io/projected/a9b1a913-7711-478b-89e3-df8371ea5012-kube-api-access-p9rvc\") pod \"a9b1a913-7711-478b-89e3-df8371ea5012\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.507042 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-client-ca\") pod \"a9b1a913-7711-478b-89e3-df8371ea5012\" (UID: 
\"a9b1a913-7711-478b-89e3-df8371ea5012\") " Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.507070 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9b1a913-7711-478b-89e3-df8371ea5012-serving-cert\") pod \"a9b1a913-7711-478b-89e3-df8371ea5012\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.507107 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw8bl\" (UniqueName: \"kubernetes.io/projected/49d19254-1c54-4bc7-8501-329543bd9763-kube-api-access-fw8bl\") pod \"49d19254-1c54-4bc7-8501-329543bd9763\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.507169 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-config\") pod \"a9b1a913-7711-478b-89e3-df8371ea5012\" (UID: \"a9b1a913-7711-478b-89e3-df8371ea5012\") " Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.507194 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49d19254-1c54-4bc7-8501-329543bd9763-serving-cert\") pod \"49d19254-1c54-4bc7-8501-329543bd9763\" (UID: \"49d19254-1c54-4bc7-8501-329543bd9763\") " Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.507952 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-client-ca" (OuterVolumeSpecName: "client-ca") pod "a9b1a913-7711-478b-89e3-df8371ea5012" (UID: "a9b1a913-7711-478b-89e3-df8371ea5012"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.507999 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a9b1a913-7711-478b-89e3-df8371ea5012" (UID: "a9b1a913-7711-478b-89e3-df8371ea5012"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.508050 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-client-ca" (OuterVolumeSpecName: "client-ca") pod "49d19254-1c54-4bc7-8501-329543bd9763" (UID: "49d19254-1c54-4bc7-8501-329543bd9763"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.507374 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-serving-cert\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.508195 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-config" (OuterVolumeSpecName: "config") pod "49d19254-1c54-4bc7-8501-329543bd9763" (UID: "49d19254-1c54-4bc7-8501-329543bd9763"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.508388 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-proxy-ca-bundles\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.508631 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdtv7\" (UniqueName: \"kubernetes.io/projected/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-kube-api-access-wdtv7\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.508684 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-client-ca\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.508721 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-config\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.508786 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.508813 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/49d19254-1c54-4bc7-8501-329543bd9763-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.508781 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-config" (OuterVolumeSpecName: "config") pod "a9b1a913-7711-478b-89e3-df8371ea5012" (UID: "a9b1a913-7711-478b-89e3-df8371ea5012"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.508826 4745 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.508909 4745 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-client-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.514881 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49d19254-1c54-4bc7-8501-329543bd9763-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "49d19254-1c54-4bc7-8501-329543bd9763" (UID: "49d19254-1c54-4bc7-8501-329543bd9763"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.515995 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49d19254-1c54-4bc7-8501-329543bd9763-kube-api-access-fw8bl" (OuterVolumeSpecName: "kube-api-access-fw8bl") pod "49d19254-1c54-4bc7-8501-329543bd9763" (UID: "49d19254-1c54-4bc7-8501-329543bd9763"). InnerVolumeSpecName "kube-api-access-fw8bl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.521142 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b1a913-7711-478b-89e3-df8371ea5012-kube-api-access-p9rvc" (OuterVolumeSpecName: "kube-api-access-p9rvc") pod "a9b1a913-7711-478b-89e3-df8371ea5012" (UID: "a9b1a913-7711-478b-89e3-df8371ea5012"). InnerVolumeSpecName "kube-api-access-p9rvc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.522174 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b1a913-7711-478b-89e3-df8371ea5012-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a9b1a913-7711-478b-89e3-df8371ea5012" (UID: "a9b1a913-7711-478b-89e3-df8371ea5012"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.609559 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-serving-cert\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.609610 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-proxy-ca-bundles\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.609662 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdtv7\" (UniqueName: \"kubernetes.io/projected/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-kube-api-access-wdtv7\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.609696 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-client-ca\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.609721 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-config\") pod 
\"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.609768 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9rvc\" (UniqueName: \"kubernetes.io/projected/a9b1a913-7711-478b-89e3-df8371ea5012-kube-api-access-p9rvc\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.609782 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a9b1a913-7711-478b-89e3-df8371ea5012-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.609792 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw8bl\" (UniqueName: \"kubernetes.io/projected/49d19254-1c54-4bc7-8501-329543bd9763-kube-api-access-fw8bl\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.609801 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9b1a913-7711-478b-89e3-df8371ea5012-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.609810 4745 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/49d19254-1c54-4bc7-8501-329543bd9763-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.611641 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-config\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.611761 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-proxy-ca-bundles\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.613211 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-client-ca\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.614284 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-serving-cert\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.631829 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdtv7\" (UniqueName: \"kubernetes.io/projected/4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92-kube-api-access-wdtv7\") pod \"controller-manager-7f967c9988-47fdd\" (UID: \"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92\") " pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.666608 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.666593 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs" event={"ID":"49d19254-1c54-4bc7-8501-329543bd9763","Type":"ContainerDied","Data":"3dab45e7b37d9876fdcf5f93d6f52e97dab4c3ebc015e015e62bde34b684dcb5"} Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.668217 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" event={"ID":"a9b1a913-7711-478b-89e3-df8371ea5012","Type":"ContainerDied","Data":"9db3770c7f2378ece3022e654ec8032835024e6bfafae702441ee7b4cf48da7e"} Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.668283 4745 scope.go:117] "RemoveContainer" containerID="2412fb798e54676e50cf0be0c5dcb626561a0dce0388f41a091b71575d8dd852" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.668500 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68bfdb-zbxjq" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.686868 4745 scope.go:117] "RemoveContainer" containerID="f5f1cdaecaf484334296c5fc592b1866ca9b1825c0ec45a715ba085402e3be7d" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.704366 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-zbxjq"] Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.711638 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.716514 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77d68bfdb-zbxjq"] Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.733777 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs"] Jan 21 10:43:17 crc kubenswrapper[4745]: I0121 10:43:17.740888 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-66d9b996-cq2hs"] Jan 21 10:43:18 crc kubenswrapper[4745]: I0121 10:43:18.008871 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49d19254-1c54-4bc7-8501-329543bd9763" path="/var/lib/kubelet/pods/49d19254-1c54-4bc7-8501-329543bd9763/volumes" Jan 21 10:43:18 crc kubenswrapper[4745]: I0121 10:43:18.009854 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b1a913-7711-478b-89e3-df8371ea5012" path="/var/lib/kubelet/pods/a9b1a913-7711-478b-89e3-df8371ea5012/volumes" Jan 21 10:43:18 crc kubenswrapper[4745]: I0121 10:43:18.259132 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f967c9988-47fdd"] Jan 21 10:43:18 crc kubenswrapper[4745]: I0121 10:43:18.677734 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" event={"ID":"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92","Type":"ContainerStarted","Data":"883a864d12b05edbf641bedad81602042931b1d14d362bac15b37f5aeb950e7e"} Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.529565 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv"] Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.530566 4745 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.533284 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.533644 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.533869 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.533979 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.534133 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.535186 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.551218 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv"] Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.640258 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw5q9\" (UniqueName: \"kubernetes.io/projected/7c4e8d39-76b3-475f-8d61-de34c3436ffc-kube-api-access-bw5q9\") pod \"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 
10:43:19.640384 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4e8d39-76b3-475f-8d61-de34c3436ffc-serving-cert\") pod \"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.640419 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4e8d39-76b3-475f-8d61-de34c3436ffc-config\") pod \"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.640462 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c4e8d39-76b3-475f-8d61-de34c3436ffc-client-ca\") pod \"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.742057 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4e8d39-76b3-475f-8d61-de34c3436ffc-serving-cert\") pod \"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.742123 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4e8d39-76b3-475f-8d61-de34c3436ffc-config\") pod 
\"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.742167 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c4e8d39-76b3-475f-8d61-de34c3436ffc-client-ca\") pod \"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.742204 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw5q9\" (UniqueName: \"kubernetes.io/projected/7c4e8d39-76b3-475f-8d61-de34c3436ffc-kube-api-access-bw5q9\") pod \"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.743872 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c4e8d39-76b3-475f-8d61-de34c3436ffc-config\") pod \"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.744199 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c4e8d39-76b3-475f-8d61-de34c3436ffc-client-ca\") pod \"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.750809 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c4e8d39-76b3-475f-8d61-de34c3436ffc-serving-cert\") pod \"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.760700 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw5q9\" (UniqueName: \"kubernetes.io/projected/7c4e8d39-76b3-475f-8d61-de34c3436ffc-kube-api-access-bw5q9\") pod \"route-controller-manager-8f6c6688d-jbcdv\" (UID: \"7c4e8d39-76b3-475f-8d61-de34c3436ffc\") " pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:19 crc kubenswrapper[4745]: I0121 10:43:19.853846 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:20 crc kubenswrapper[4745]: I0121 10:43:20.173374 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv"] Jan 21 10:43:20 crc kubenswrapper[4745]: W0121 10:43:20.183155 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c4e8d39_76b3_475f_8d61_de34c3436ffc.slice/crio-4f7d1ce21f4886eb478e7d430acb1db7e2832bcd746a43998bb95bdf9562adc5 WatchSource:0}: Error finding container 4f7d1ce21f4886eb478e7d430acb1db7e2832bcd746a43998bb95bdf9562adc5: Status 404 returned error can't find the container with id 4f7d1ce21f4886eb478e7d430acb1db7e2832bcd746a43998bb95bdf9562adc5 Jan 21 10:43:20 crc kubenswrapper[4745]: I0121 10:43:20.689353 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" 
event={"ID":"7c4e8d39-76b3-475f-8d61-de34c3436ffc","Type":"ContainerStarted","Data":"c18a92a389ffc8e90d8a049fa40dc60864e06d7e5079b3da1d26b8ee353bec08"} Jan 21 10:43:20 crc kubenswrapper[4745]: I0121 10:43:20.689613 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" event={"ID":"7c4e8d39-76b3-475f-8d61-de34c3436ffc","Type":"ContainerStarted","Data":"4f7d1ce21f4886eb478e7d430acb1db7e2832bcd746a43998bb95bdf9562adc5"} Jan 21 10:43:20 crc kubenswrapper[4745]: I0121 10:43:20.689884 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:20 crc kubenswrapper[4745]: I0121 10:43:20.691003 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" event={"ID":"4a3aa3fe-94bc-4ccb-9447-de22f0ed9e92","Type":"ContainerStarted","Data":"35fd24b0bb3b383e8b58996ac9134bcd78e340121fcfb9e3e40ff28bd2023daf"} Jan 21 10:43:20 crc kubenswrapper[4745]: I0121 10:43:20.691295 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:20 crc kubenswrapper[4745]: I0121 10:43:20.704362 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" Jan 21 10:43:20 crc kubenswrapper[4745]: I0121 10:43:20.708286 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" podStartSLOduration=5.70827653 podStartE2EDuration="5.70827653s" podCreationTimestamp="2026-01-21 10:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:43:20.707499588 +0000 UTC m=+385.168287186" 
watchObservedRunningTime="2026-01-21 10:43:20.70827653 +0000 UTC m=+385.169064128" Jan 21 10:43:20 crc kubenswrapper[4745]: I0121 10:43:20.923952 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" Jan 21 10:43:20 crc kubenswrapper[4745]: I0121 10:43:20.951360 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f967c9988-47fdd" podStartSLOduration=5.951331256 podStartE2EDuration="5.951331256s" podCreationTimestamp="2026-01-21 10:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:43:20.730955486 +0000 UTC m=+385.191743084" watchObservedRunningTime="2026-01-21 10:43:20.951331256 +0000 UTC m=+385.412118854" Jan 21 10:43:32 crc kubenswrapper[4745]: I0121 10:43:32.147104 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-4c4t9" Jan 21 10:43:32 crc kubenswrapper[4745]: I0121 10:43:32.542498 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z5zq"] Jan 21 10:43:39 crc kubenswrapper[4745]: I0121 10:43:39.992354 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c6pc4"] Jan 21 10:43:39 crc kubenswrapper[4745]: I0121 10:43:39.996015 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c6pc4" podUID="9d721ed0-4c33-4912-8973-e583db1e2075" containerName="registry-server" containerID="cri-o://2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64" gracePeriod=30 Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.008376 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52d7q"] Jan 
21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.008733 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-52d7q" podUID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" containerName="registry-server" containerID="cri-o://c765dc1d997c11db6920633421833c361eeba7f72d7e6bb7f8bab33263a2304d" gracePeriod=30 Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.024301 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fcg2s"] Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.024518 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" podUID="db0e48bf-347d-4985-b809-a25cc11db944" containerName="marketplace-operator" containerID="cri-o://54195a2c3c6db705824f88ec8d350e9918b296e763b6ac307428033a2a0d69c9" gracePeriod=30 Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.040027 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqts"] Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.040298 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rgqts" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerName="registry-server" containerID="cri-o://c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b" gracePeriod=30 Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.049489 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7989r"] Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.051229 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7989r" podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerName="registry-server" 
containerID="cri-o://3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada" gracePeriod=30 Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.060088 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gkrg9"] Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.061374 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.080832 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gkrg9"] Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.166315 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b3ae4633-cf73-4280-8cac-28ff7399bede-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gkrg9\" (UID: \"b3ae4633-cf73-4280-8cac-28ff7399bede\") " pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.166389 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkx7x\" (UniqueName: \"kubernetes.io/projected/b3ae4633-cf73-4280-8cac-28ff7399bede-kube-api-access-hkx7x\") pod \"marketplace-operator-79b997595-gkrg9\" (UID: \"b3ae4633-cf73-4280-8cac-28ff7399bede\") " pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.167682 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3ae4633-cf73-4280-8cac-28ff7399bede-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gkrg9\" (UID: \"b3ae4633-cf73-4280-8cac-28ff7399bede\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:40 crc kubenswrapper[4745]: E0121 10:43:40.192295 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b is running failed: container process not found" containerID="c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 10:43:40 crc kubenswrapper[4745]: E0121 10:43:40.193025 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b is running failed: container process not found" containerID="c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 10:43:40 crc kubenswrapper[4745]: E0121 10:43:40.201568 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b is running failed: container process not found" containerID="c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 10:43:40 crc kubenswrapper[4745]: E0121 10:43:40.201657 4745 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-rgqts" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerName="registry-server" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.269158 4745 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3ae4633-cf73-4280-8cac-28ff7399bede-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gkrg9\" (UID: \"b3ae4633-cf73-4280-8cac-28ff7399bede\") " pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.269471 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b3ae4633-cf73-4280-8cac-28ff7399bede-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gkrg9\" (UID: \"b3ae4633-cf73-4280-8cac-28ff7399bede\") " pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.269498 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkx7x\" (UniqueName: \"kubernetes.io/projected/b3ae4633-cf73-4280-8cac-28ff7399bede-kube-api-access-hkx7x\") pod \"marketplace-operator-79b997595-gkrg9\" (UID: \"b3ae4633-cf73-4280-8cac-28ff7399bede\") " pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.271407 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b3ae4633-cf73-4280-8cac-28ff7399bede-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-gkrg9\" (UID: \"b3ae4633-cf73-4280-8cac-28ff7399bede\") " pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.278694 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b3ae4633-cf73-4280-8cac-28ff7399bede-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-gkrg9\" (UID: \"b3ae4633-cf73-4280-8cac-28ff7399bede\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.289868 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkx7x\" (UniqueName: \"kubernetes.io/projected/b3ae4633-cf73-4280-8cac-28ff7399bede-kube-api-access-hkx7x\") pod \"marketplace-operator-79b997595-gkrg9\" (UID: \"b3ae4633-cf73-4280-8cac-28ff7399bede\") " pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.401307 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.601171 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.675456 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-catalog-content\") pod \"9d721ed0-4c33-4912-8973-e583db1e2075\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.675600 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-utilities\") pod \"9d721ed0-4c33-4912-8973-e583db1e2075\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.675630 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj44x\" (UniqueName: \"kubernetes.io/projected/9d721ed0-4c33-4912-8973-e583db1e2075-kube-api-access-fj44x\") pod \"9d721ed0-4c33-4912-8973-e583db1e2075\" (UID: \"9d721ed0-4c33-4912-8973-e583db1e2075\") " Jan 21 10:43:40 crc kubenswrapper[4745]: 
I0121 10:43:40.677238 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-utilities" (OuterVolumeSpecName: "utilities") pod "9d721ed0-4c33-4912-8973-e583db1e2075" (UID: "9d721ed0-4c33-4912-8973-e583db1e2075"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.681499 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d721ed0-4c33-4912-8973-e583db1e2075-kube-api-access-fj44x" (OuterVolumeSpecName: "kube-api-access-fj44x") pod "9d721ed0-4c33-4912-8973-e583db1e2075" (UID: "9d721ed0-4c33-4912-8973-e583db1e2075"). InnerVolumeSpecName "kube-api-access-fj44x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.729844 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d721ed0-4c33-4912-8973-e583db1e2075" (UID: "9d721ed0-4c33-4912-8973-e583db1e2075"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.779333 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.780126 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d721ed0-4c33-4912-8973-e583db1e2075-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.780461 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj44x\" (UniqueName: \"kubernetes.io/projected/9d721ed0-4c33-4912-8973-e583db1e2075-kube-api-access-fj44x\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.817021 4745 generic.go:334] "Generic (PLEG): container finished" podID="9d721ed0-4c33-4912-8973-e583db1e2075" containerID="2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64" exitCode=0 Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.817087 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6pc4" event={"ID":"9d721ed0-4c33-4912-8973-e583db1e2075","Type":"ContainerDied","Data":"2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64"} Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.817115 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c6pc4" event={"ID":"9d721ed0-4c33-4912-8973-e583db1e2075","Type":"ContainerDied","Data":"0dfb330a58249d562bbc6573d68d6a06acd60f851f06b6bdfa084551c8bd3183"} Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.817195 4745 scope.go:117] "RemoveContainer" containerID="2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 
10:43:40.817606 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c6pc4" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.822626 4745 generic.go:334] "Generic (PLEG): container finished" podID="db0e48bf-347d-4985-b809-a25cc11db944" containerID="54195a2c3c6db705824f88ec8d350e9918b296e763b6ac307428033a2a0d69c9" exitCode=0 Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.822733 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" event={"ID":"db0e48bf-347d-4985-b809-a25cc11db944","Type":"ContainerDied","Data":"54195a2c3c6db705824f88ec8d350e9918b296e763b6ac307428033a2a0d69c9"} Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.830678 4745 generic.go:334] "Generic (PLEG): container finished" podID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerID="3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada" exitCode=0 Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.830921 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7989r" event={"ID":"131ae967-4e30-4b48-a2c7-fdcfc1109db8","Type":"ContainerDied","Data":"3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada"} Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.870636 4745 generic.go:334] "Generic (PLEG): container finished" podID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerID="c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b" exitCode=0 Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.870760 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqts" event={"ID":"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed","Type":"ContainerDied","Data":"c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b"} Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.876313 4745 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/certified-operators-c6pc4"] Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.886240 4745 generic.go:334] "Generic (PLEG): container finished" podID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" containerID="c765dc1d997c11db6920633421833c361eeba7f72d7e6bb7f8bab33263a2304d" exitCode=0 Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.886301 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52d7q" event={"ID":"be69561a-c25a-4e96-b75f-4f5664c5f2c4","Type":"ContainerDied","Data":"c765dc1d997c11db6920633421833c361eeba7f72d7e6bb7f8bab33263a2304d"} Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.887605 4745 scope.go:117] "RemoveContainer" containerID="30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.892768 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c6pc4"] Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.907657 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.915915 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.922849 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.923854 4745 scope.go:117] "RemoveContainer" containerID="0666094b5579eff7511dc08f95010134fb0ddca95334b060b2ad31cd44c368a4" Jan 21 10:43:40 crc kubenswrapper[4745]: E0121 10:43:40.961278 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada is running failed: container process not found" containerID="3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 10:43:40 crc kubenswrapper[4745]: E0121 10:43:40.961850 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada is running failed: container process not found" containerID="3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 10:43:40 crc kubenswrapper[4745]: E0121 10:43:40.962107 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada is running failed: container process not found" containerID="3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada" cmd=["grpc_health_probe","-addr=:50051"] Jan 21 10:43:40 crc kubenswrapper[4745]: E0121 10:43:40.962137 4745 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-7989r" 
podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerName="registry-server" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.963836 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.972024 4745 scope.go:117] "RemoveContainer" containerID="2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64" Jan 21 10:43:40 crc kubenswrapper[4745]: E0121 10:43:40.972723 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64\": container with ID starting with 2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64 not found: ID does not exist" containerID="2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.972760 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64"} err="failed to get container status \"2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64\": rpc error: code = NotFound desc = could not find container \"2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64\": container with ID starting with 2e6988704d39dce67c30f703936e943f69ef9bdb0af68c6057a65b37cc0f7b64 not found: ID does not exist" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.972866 4745 scope.go:117] "RemoveContainer" containerID="30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54" Jan 21 10:43:40 crc kubenswrapper[4745]: E0121 10:43:40.976008 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54\": container with ID starting with 
30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54 not found: ID does not exist" containerID="30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.976080 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54"} err="failed to get container status \"30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54\": rpc error: code = NotFound desc = could not find container \"30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54\": container with ID starting with 30c2abe57fe9791dfa771fd7416e7ebbd0291ef8254003b7c164aaf258292a54 not found: ID does not exist" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.976119 4745 scope.go:117] "RemoveContainer" containerID="0666094b5579eff7511dc08f95010134fb0ddca95334b060b2ad31cd44c368a4" Jan 21 10:43:40 crc kubenswrapper[4745]: E0121 10:43:40.977598 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0666094b5579eff7511dc08f95010134fb0ddca95334b060b2ad31cd44c368a4\": container with ID starting with 0666094b5579eff7511dc08f95010134fb0ddca95334b060b2ad31cd44c368a4 not found: ID does not exist" containerID="0666094b5579eff7511dc08f95010134fb0ddca95334b060b2ad31cd44c368a4" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.977713 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0666094b5579eff7511dc08f95010134fb0ddca95334b060b2ad31cd44c368a4"} err="failed to get container status \"0666094b5579eff7511dc08f95010134fb0ddca95334b060b2ad31cd44c368a4\": rpc error: code = NotFound desc = could not find container \"0666094b5579eff7511dc08f95010134fb0ddca95334b060b2ad31cd44c368a4\": container with ID starting with 0666094b5579eff7511dc08f95010134fb0ddca95334b060b2ad31cd44c368a4 not found: ID does not 
exist" Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.985045 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mzl6\" (UniqueName: \"kubernetes.io/projected/db0e48bf-347d-4985-b809-a25cc11db944-kube-api-access-5mzl6\") pod \"db0e48bf-347d-4985-b809-a25cc11db944\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.985103 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-catalog-content\") pod \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.985151 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-operator-metrics\") pod \"db0e48bf-347d-4985-b809-a25cc11db944\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.985172 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-utilities\") pod \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.985207 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hk28m\" (UniqueName: \"kubernetes.io/projected/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-kube-api-access-hk28m\") pod \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.985289 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-trusted-ca\") pod \"db0e48bf-347d-4985-b809-a25cc11db944\" (UID: \"db0e48bf-347d-4985-b809-a25cc11db944\") " Jan 21 10:43:40 crc kubenswrapper[4745]: I0121 10:43:40.986013 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249dm\" (UniqueName: \"kubernetes.io/projected/be69561a-c25a-4e96-b75f-4f5664c5f2c4-kube-api-access-249dm\") pod \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:40.986690 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "db0e48bf-347d-4985-b809-a25cc11db944" (UID: "db0e48bf-347d-4985-b809-a25cc11db944"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:40.987932 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-utilities" (OuterVolumeSpecName: "utilities") pod "be69561a-c25a-4e96-b75f-4f5664c5f2c4" (UID: "be69561a-c25a-4e96-b75f-4f5664c5f2c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.009122 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-catalog-content\") pod \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\" (UID: \"be69561a-c25a-4e96-b75f-4f5664c5f2c4\") " Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.021316 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be69561a-c25a-4e96-b75f-4f5664c5f2c4-kube-api-access-249dm" (OuterVolumeSpecName: "kube-api-access-249dm") pod "be69561a-c25a-4e96-b75f-4f5664c5f2c4" (UID: "be69561a-c25a-4e96-b75f-4f5664c5f2c4"). InnerVolumeSpecName "kube-api-access-249dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.021778 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "db0e48bf-347d-4985-b809-a25cc11db944" (UID: "db0e48bf-347d-4985-b809-a25cc11db944"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.023083 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-kube-api-access-hk28m" (OuterVolumeSpecName: "kube-api-access-hk28m") pod "d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" (UID: "d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed"). InnerVolumeSpecName "kube-api-access-hk28m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.023910 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db0e48bf-347d-4985-b809-a25cc11db944-kube-api-access-5mzl6" (OuterVolumeSpecName: "kube-api-access-5mzl6") pod "db0e48bf-347d-4985-b809-a25cc11db944" (UID: "db0e48bf-347d-4985-b809-a25cc11db944"). InnerVolumeSpecName "kube-api-access-5mzl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.029275 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-utilities\") pod \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\" (UID: \"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed\") " Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.030118 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.030148 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hk28m\" (UniqueName: \"kubernetes.io/projected/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-kube-api-access-hk28m\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.030161 4745 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.030174 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249dm\" (UniqueName: \"kubernetes.io/projected/be69561a-c25a-4e96-b75f-4f5664c5f2c4-kube-api-access-249dm\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc 
kubenswrapper[4745]: I0121 10:43:41.030184 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mzl6\" (UniqueName: \"kubernetes.io/projected/db0e48bf-347d-4985-b809-a25cc11db944-kube-api-access-5mzl6\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.030195 4745 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/db0e48bf-347d-4985-b809-a25cc11db944-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.032989 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-utilities" (OuterVolumeSpecName: "utilities") pod "d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" (UID: "d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.039543 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" (UID: "d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.065509 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be69561a-c25a-4e96-b75f-4f5664c5f2c4" (UID: "be69561a-c25a-4e96-b75f-4f5664c5f2c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.131595 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lvhd\" (UniqueName: \"kubernetes.io/projected/131ae967-4e30-4b48-a2c7-fdcfc1109db8-kube-api-access-7lvhd\") pod \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.132023 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-catalog-content\") pod \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.132245 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-utilities\") pod \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\" (UID: \"131ae967-4e30-4b48-a2c7-fdcfc1109db8\") " Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.132883 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be69561a-c25a-4e96-b75f-4f5664c5f2c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.132976 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.133025 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-utilities" (OuterVolumeSpecName: "utilities") pod "131ae967-4e30-4b48-a2c7-fdcfc1109db8" (UID: "131ae967-4e30-4b48-a2c7-fdcfc1109db8"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.133057 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.136413 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131ae967-4e30-4b48-a2c7-fdcfc1109db8-kube-api-access-7lvhd" (OuterVolumeSpecName: "kube-api-access-7lvhd") pod "131ae967-4e30-4b48-a2c7-fdcfc1109db8" (UID: "131ae967-4e30-4b48-a2c7-fdcfc1109db8"). InnerVolumeSpecName "kube-api-access-7lvhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.165902 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-gkrg9"] Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.234109 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.234147 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lvhd\" (UniqueName: \"kubernetes.io/projected/131ae967-4e30-4b48-a2c7-fdcfc1109db8-kube-api-access-7lvhd\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.275290 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "131ae967-4e30-4b48-a2c7-fdcfc1109db8" (UID: "131ae967-4e30-4b48-a2c7-fdcfc1109db8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.335744 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/131ae967-4e30-4b48-a2c7-fdcfc1109db8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.894552 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" event={"ID":"b3ae4633-cf73-4280-8cac-28ff7399bede","Type":"ContainerStarted","Data":"49e7ddf164a57185f07bb5fdb27bb99dcaaa30080335874e9db8d0b8e51bf8cd"} Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.895084 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.895116 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" event={"ID":"b3ae4633-cf73-4280-8cac-28ff7399bede","Type":"ContainerStarted","Data":"a322a86702efca0b638c98fe91c59b1783cc596b4dfec46722fcc06911b3d692"} Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.899761 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.900016 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-52d7q" event={"ID":"be69561a-c25a-4e96-b75f-4f5664c5f2c4","Type":"ContainerDied","Data":"40c1bfc4568a4454d4aeb61f635e1a8b1c2e3039caaa63a5cff1961c3d81cce8"} Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.900070 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-52d7q" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.900068 4745 scope.go:117] "RemoveContainer" containerID="c765dc1d997c11db6920633421833c361eeba7f72d7e6bb7f8bab33263a2304d" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.909799 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" event={"ID":"db0e48bf-347d-4985-b809-a25cc11db944","Type":"ContainerDied","Data":"59dea8809b5d2f951f1b10ea4da9f83675c5071ac03a73eaaacee22c2f31328a"} Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.909951 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-fcg2s" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.919311 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-gkrg9" podStartSLOduration=1.919284601 podStartE2EDuration="1.919284601s" podCreationTimestamp="2026-01-21 10:43:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:43:41.919159688 +0000 UTC m=+406.379947286" watchObservedRunningTime="2026-01-21 10:43:41.919284601 +0000 UTC m=+406.380072199" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.937463 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7989r" event={"ID":"131ae967-4e30-4b48-a2c7-fdcfc1109db8","Type":"ContainerDied","Data":"2a19956830b8dde330a56516496e14a1d6407c37bd600a5fa7df240c689e0c17"} Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.937632 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7989r" Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.964038 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rgqts" event={"ID":"d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed","Type":"ContainerDied","Data":"1fd0a032fdeeca86471924714a7681e8913f24f212fdea214e80b509f4f931d1"} Jan 21 10:43:41 crc kubenswrapper[4745]: I0121 10:43:41.964166 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rgqts" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.003715 4745 scope.go:117] "RemoveContainer" containerID="071b7a6004358557713e215dd2c7d14d199c910919d090bd4d06dd50ea87ccec" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.034183 4745 scope.go:117] "RemoveContainer" containerID="42d7a305954bf9870efb69feca1afd24ac45a65ec6e56e90ec0ad99cb436f6c5" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.054468 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d721ed0-4c33-4912-8973-e583db1e2075" path="/var/lib/kubelet/pods/9d721ed0-4c33-4912-8973-e583db1e2075/volumes" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.056864 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fcg2s"] Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.065710 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-fcg2s"] Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.072221 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-52d7q"] Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.076729 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-52d7q"] Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.081411 4745 scope.go:117] 
"RemoveContainer" containerID="54195a2c3c6db705824f88ec8d350e9918b296e763b6ac307428033a2a0d69c9" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.095868 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqts"] Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.105115 4745 scope.go:117] "RemoveContainer" containerID="3fcb452db5debdf09d627847654337bed08e7515a5f5c582440a31a2f2267ada" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.106645 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rgqts"] Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.112332 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7989r"] Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.117595 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7989r"] Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.120713 4745 scope.go:117] "RemoveContainer" containerID="a5d2bbd831a6c6cff749fbcd5933ba50ddae76cbac2267670ab20f03ca3a4036" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.138468 4745 scope.go:117] "RemoveContainer" containerID="a89d58e18f1e2458538c7f6c2bf76375e18a7925dc4e4b8e2faf0b66d5d5b5ee" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.167853 4745 scope.go:117] "RemoveContainer" containerID="c7768afd6c73b5ad07fe2c5c473de3de2b2fba5070083afb05499d5daa26eb9b" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.181510 4745 scope.go:117] "RemoveContainer" containerID="eed807207bb05a33b2d34605f3c43a6287a86d10f97e199fd07e5d504de683ac" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.210613 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-57bm9"] Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.210875 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.210892 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.210907 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0e48bf-347d-4985-b809-a25cc11db944" containerName="marketplace-operator" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.210916 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0e48bf-347d-4985-b809-a25cc11db944" containerName="marketplace-operator" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.210926 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerName="extract-utilities" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.210934 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerName="extract-utilities" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.210945 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerName="extract-content" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.210954 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerName="extract-content" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.210963 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.210971 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.210980 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9d721ed0-4c33-4912-8973-e583db1e2075" containerName="extract-content" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.210989 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d721ed0-4c33-4912-8973-e583db1e2075" containerName="extract-content" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.211000 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d721ed0-4c33-4912-8973-e583db1e2075" containerName="extract-utilities" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.211008 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d721ed0-4c33-4912-8973-e583db1e2075" containerName="extract-utilities" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.211020 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" containerName="extract-content" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.211027 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" containerName="extract-content" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.212323 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerName="extract-utilities" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.212345 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerName="extract-utilities" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.212361 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerName="extract-content" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.212371 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerName="extract-content" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.212385 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9d721ed0-4c33-4912-8973-e583db1e2075" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.212393 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d721ed0-4c33-4912-8973-e583db1e2075" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.212405 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" containerName="extract-utilities" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.212413 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" containerName="extract-utilities" Jan 21 10:43:42 crc kubenswrapper[4745]: E0121 10:43:42.212423 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.212434 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.212561 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.212577 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.212591 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d721ed0-4c33-4912-8973-e583db1e2075" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.212603 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" containerName="registry-server" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.212617 4745 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="db0e48bf-347d-4985-b809-a25cc11db944" containerName="marketplace-operator" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.213609 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57bm9" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.216157 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.222386 4745 scope.go:117] "RemoveContainer" containerID="49fbc28fa61864f6e8f108f29075c7b07e8310b35e54cfa8a43e9fe4cf9e5bc5" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.236154 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57bm9"] Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.364665 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec4cd655-4062-4058-9de3-81d9ebb11d1b-utilities\") pod \"certified-operators-57bm9\" (UID: \"ec4cd655-4062-4058-9de3-81d9ebb11d1b\") " pod="openshift-marketplace/certified-operators-57bm9" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.365122 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec4cd655-4062-4058-9de3-81d9ebb11d1b-catalog-content\") pod \"certified-operators-57bm9\" (UID: \"ec4cd655-4062-4058-9de3-81d9ebb11d1b\") " pod="openshift-marketplace/certified-operators-57bm9" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.365210 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7hc9\" (UniqueName: \"kubernetes.io/projected/ec4cd655-4062-4058-9de3-81d9ebb11d1b-kube-api-access-t7hc9\") pod \"certified-operators-57bm9\" (UID: 
\"ec4cd655-4062-4058-9de3-81d9ebb11d1b\") " pod="openshift-marketplace/certified-operators-57bm9" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.466858 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec4cd655-4062-4058-9de3-81d9ebb11d1b-catalog-content\") pod \"certified-operators-57bm9\" (UID: \"ec4cd655-4062-4058-9de3-81d9ebb11d1b\") " pod="openshift-marketplace/certified-operators-57bm9" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.466931 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7hc9\" (UniqueName: \"kubernetes.io/projected/ec4cd655-4062-4058-9de3-81d9ebb11d1b-kube-api-access-t7hc9\") pod \"certified-operators-57bm9\" (UID: \"ec4cd655-4062-4058-9de3-81d9ebb11d1b\") " pod="openshift-marketplace/certified-operators-57bm9" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.467003 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec4cd655-4062-4058-9de3-81d9ebb11d1b-utilities\") pod \"certified-operators-57bm9\" (UID: \"ec4cd655-4062-4058-9de3-81d9ebb11d1b\") " pod="openshift-marketplace/certified-operators-57bm9" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.467899 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ec4cd655-4062-4058-9de3-81d9ebb11d1b-catalog-content\") pod \"certified-operators-57bm9\" (UID: \"ec4cd655-4062-4058-9de3-81d9ebb11d1b\") " pod="openshift-marketplace/certified-operators-57bm9" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.468020 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ec4cd655-4062-4058-9de3-81d9ebb11d1b-utilities\") pod \"certified-operators-57bm9\" (UID: \"ec4cd655-4062-4058-9de3-81d9ebb11d1b\") 
" pod="openshift-marketplace/certified-operators-57bm9" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.493665 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7hc9\" (UniqueName: \"kubernetes.io/projected/ec4cd655-4062-4058-9de3-81d9ebb11d1b-kube-api-access-t7hc9\") pod \"certified-operators-57bm9\" (UID: \"ec4cd655-4062-4058-9de3-81d9ebb11d1b\") " pod="openshift-marketplace/certified-operators-57bm9" Jan 21 10:43:42 crc kubenswrapper[4745]: I0121 10:43:42.574896 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-57bm9" Jan 21 10:43:43 crc kubenswrapper[4745]: I0121 10:43:43.003760 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-57bm9"] Jan 21 10:43:43 crc kubenswrapper[4745]: W0121 10:43:43.014653 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podec4cd655_4062_4058_9de3_81d9ebb11d1b.slice/crio-1742750da022f941137b901b9aa708f88a38c8bd9d4faa9a9801209139814b33 WatchSource:0}: Error finding container 1742750da022f941137b901b9aa708f88a38c8bd9d4faa9a9801209139814b33: Status 404 returned error can't find the container with id 1742750da022f941137b901b9aa708f88a38c8bd9d4faa9a9801209139814b33 Jan 21 10:43:43 crc kubenswrapper[4745]: I0121 10:43:43.987817 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57bm9" event={"ID":"ec4cd655-4062-4058-9de3-81d9ebb11d1b","Type":"ContainerDied","Data":"3f97e4ed2b40f4eb88acddf29c1fc1e1424b5f973cc1b3c164a232846589e005"} Jan 21 10:43:43 crc kubenswrapper[4745]: I0121 10:43:43.989208 4745 generic.go:334] "Generic (PLEG): container finished" podID="ec4cd655-4062-4058-9de3-81d9ebb11d1b" containerID="3f97e4ed2b40f4eb88acddf29c1fc1e1424b5f973cc1b3c164a232846589e005" exitCode=0 Jan 21 10:43:43 crc kubenswrapper[4745]: I0121 
10:43:43.989311 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57bm9" event={"ID":"ec4cd655-4062-4058-9de3-81d9ebb11d1b","Type":"ContainerStarted","Data":"1742750da022f941137b901b9aa708f88a38c8bd9d4faa9a9801209139814b33"} Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.042284 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="131ae967-4e30-4b48-a2c7-fdcfc1109db8" path="/var/lib/kubelet/pods/131ae967-4e30-4b48-a2c7-fdcfc1109db8/volumes" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.043030 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be69561a-c25a-4e96-b75f-4f5664c5f2c4" path="/var/lib/kubelet/pods/be69561a-c25a-4e96-b75f-4f5664c5f2c4/volumes" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.047201 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed" path="/var/lib/kubelet/pods/d0bfd7b7-bb9e-48a4-b7e1-adf4540831ed/volumes" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.049353 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db0e48bf-347d-4985-b809-a25cc11db944" path="/var/lib/kubelet/pods/db0e48bf-347d-4985-b809-a25cc11db944/volumes" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.409238 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-w8f8b"] Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.410718 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8f8b" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.417545 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.433599 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8f8b"] Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.503149 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca113352-0f64-44d4-93d8-250df55bef46-catalog-content\") pod \"redhat-marketplace-w8f8b\" (UID: \"ca113352-0f64-44d4-93d8-250df55bef46\") " pod="openshift-marketplace/redhat-marketplace-w8f8b" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.504036 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca113352-0f64-44d4-93d8-250df55bef46-utilities\") pod \"redhat-marketplace-w8f8b\" (UID: \"ca113352-0f64-44d4-93d8-250df55bef46\") " pod="openshift-marketplace/redhat-marketplace-w8f8b" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.504065 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnzx4\" (UniqueName: \"kubernetes.io/projected/ca113352-0f64-44d4-93d8-250df55bef46-kube-api-access-pnzx4\") pod \"redhat-marketplace-w8f8b\" (UID: \"ca113352-0f64-44d4-93d8-250df55bef46\") " pod="openshift-marketplace/redhat-marketplace-w8f8b" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.606066 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca113352-0f64-44d4-93d8-250df55bef46-catalog-content\") pod \"redhat-marketplace-w8f8b\" (UID: 
\"ca113352-0f64-44d4-93d8-250df55bef46\") " pod="openshift-marketplace/redhat-marketplace-w8f8b" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.606163 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnzx4\" (UniqueName: \"kubernetes.io/projected/ca113352-0f64-44d4-93d8-250df55bef46-kube-api-access-pnzx4\") pod \"redhat-marketplace-w8f8b\" (UID: \"ca113352-0f64-44d4-93d8-250df55bef46\") " pod="openshift-marketplace/redhat-marketplace-w8f8b" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.606195 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca113352-0f64-44d4-93d8-250df55bef46-utilities\") pod \"redhat-marketplace-w8f8b\" (UID: \"ca113352-0f64-44d4-93d8-250df55bef46\") " pod="openshift-marketplace/redhat-marketplace-w8f8b" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.607631 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca113352-0f64-44d4-93d8-250df55bef46-catalog-content\") pod \"redhat-marketplace-w8f8b\" (UID: \"ca113352-0f64-44d4-93d8-250df55bef46\") " pod="openshift-marketplace/redhat-marketplace-w8f8b" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.607784 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca113352-0f64-44d4-93d8-250df55bef46-utilities\") pod \"redhat-marketplace-w8f8b\" (UID: \"ca113352-0f64-44d4-93d8-250df55bef46\") " pod="openshift-marketplace/redhat-marketplace-w8f8b" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.610331 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2q52q"] Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.613872 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2q52q" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.617055 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.624974 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2q52q"] Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.651780 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnzx4\" (UniqueName: \"kubernetes.io/projected/ca113352-0f64-44d4-93d8-250df55bef46-kube-api-access-pnzx4\") pod \"redhat-marketplace-w8f8b\" (UID: \"ca113352-0f64-44d4-93d8-250df55bef46\") " pod="openshift-marketplace/redhat-marketplace-w8f8b" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.708064 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d2344c-d406-471d-aafd-7b04d5ed29cf-catalog-content\") pod \"redhat-operators-2q52q\" (UID: \"f7d2344c-d406-471d-aafd-7b04d5ed29cf\") " pod="openshift-marketplace/redhat-operators-2q52q" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.708184 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66m62\" (UniqueName: \"kubernetes.io/projected/f7d2344c-d406-471d-aafd-7b04d5ed29cf-kube-api-access-66m62\") pod \"redhat-operators-2q52q\" (UID: \"f7d2344c-d406-471d-aafd-7b04d5ed29cf\") " pod="openshift-marketplace/redhat-operators-2q52q" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.708211 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d2344c-d406-471d-aafd-7b04d5ed29cf-utilities\") pod \"redhat-operators-2q52q\" (UID: 
\"f7d2344c-d406-471d-aafd-7b04d5ed29cf\") " pod="openshift-marketplace/redhat-operators-2q52q" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.737779 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-w8f8b" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.809510 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66m62\" (UniqueName: \"kubernetes.io/projected/f7d2344c-d406-471d-aafd-7b04d5ed29cf-kube-api-access-66m62\") pod \"redhat-operators-2q52q\" (UID: \"f7d2344c-d406-471d-aafd-7b04d5ed29cf\") " pod="openshift-marketplace/redhat-operators-2q52q" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.809604 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f7d2344c-d406-471d-aafd-7b04d5ed29cf-utilities\") pod \"redhat-operators-2q52q\" (UID: \"f7d2344c-d406-471d-aafd-7b04d5ed29cf\") " pod="openshift-marketplace/redhat-operators-2q52q" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.809664 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d2344c-d406-471d-aafd-7b04d5ed29cf-catalog-content\") pod \"redhat-operators-2q52q\" (UID: \"f7d2344c-d406-471d-aafd-7b04d5ed29cf\") " pod="openshift-marketplace/redhat-operators-2q52q" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.810228 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f7d2344c-d406-471d-aafd-7b04d5ed29cf-catalog-content\") pod \"redhat-operators-2q52q\" (UID: \"f7d2344c-d406-471d-aafd-7b04d5ed29cf\") " pod="openshift-marketplace/redhat-operators-2q52q" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.810250 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/f7d2344c-d406-471d-aafd-7b04d5ed29cf-utilities\") pod \"redhat-operators-2q52q\" (UID: \"f7d2344c-d406-471d-aafd-7b04d5ed29cf\") " pod="openshift-marketplace/redhat-operators-2q52q" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.845726 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66m62\" (UniqueName: \"kubernetes.io/projected/f7d2344c-d406-471d-aafd-7b04d5ed29cf-kube-api-access-66m62\") pod \"redhat-operators-2q52q\" (UID: \"f7d2344c-d406-471d-aafd-7b04d5ed29cf\") " pod="openshift-marketplace/redhat-operators-2q52q" Jan 21 10:43:44 crc kubenswrapper[4745]: I0121 10:43:44.931756 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2q52q" Jan 21 10:43:45 crc kubenswrapper[4745]: I0121 10:43:45.004166 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57bm9" event={"ID":"ec4cd655-4062-4058-9de3-81d9ebb11d1b","Type":"ContainerStarted","Data":"ce33826ee62a84265ead86a52a4376fbf66d0f1b8fc895e058ccbd2b53a6b9a3"} Jan 21 10:43:45 crc kubenswrapper[4745]: I0121 10:43:45.203379 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-w8f8b"] Jan 21 10:43:45 crc kubenswrapper[4745]: W0121 10:43:45.210003 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca113352_0f64_44d4_93d8_250df55bef46.slice/crio-573f41607d6dd6b6d957b18a78298e03064dd5a2a51f87be102c64391e1fc25c WatchSource:0}: Error finding container 573f41607d6dd6b6d957b18a78298e03064dd5a2a51f87be102c64391e1fc25c: Status 404 returned error can't find the container with id 573f41607d6dd6b6d957b18a78298e03064dd5a2a51f87be102c64391e1fc25c Jan 21 10:43:45 crc kubenswrapper[4745]: I0121 10:43:45.362506 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-2q52q"] Jan 21 10:43:45 crc kubenswrapper[4745]: W0121 10:43:45.371521 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7d2344c_d406_471d_aafd_7b04d5ed29cf.slice/crio-e45d39d9cbac05f34a324ccdaff4e1804f675bda4889d16913bbd57dc6286918 WatchSource:0}: Error finding container e45d39d9cbac05f34a324ccdaff4e1804f675bda4889d16913bbd57dc6286918: Status 404 returned error can't find the container with id e45d39d9cbac05f34a324ccdaff4e1804f675bda4889d16913bbd57dc6286918 Jan 21 10:43:45 crc kubenswrapper[4745]: I0121 10:43:45.866568 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:43:45 crc kubenswrapper[4745]: I0121 10:43:45.866662 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.029760 4745 generic.go:334] "Generic (PLEG): container finished" podID="f7d2344c-d406-471d-aafd-7b04d5ed29cf" containerID="6d82866f9012991e8c570f277bb0956b3a095d2bcadce72fa9fbf125280cb1a8" exitCode=0 Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.030000 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q52q" event={"ID":"f7d2344c-d406-471d-aafd-7b04d5ed29cf","Type":"ContainerDied","Data":"6d82866f9012991e8c570f277bb0956b3a095d2bcadce72fa9fbf125280cb1a8"} Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.030291 4745 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q52q" event={"ID":"f7d2344c-d406-471d-aafd-7b04d5ed29cf","Type":"ContainerStarted","Data":"e45d39d9cbac05f34a324ccdaff4e1804f675bda4889d16913bbd57dc6286918"} Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.039657 4745 generic.go:334] "Generic (PLEG): container finished" podID="ec4cd655-4062-4058-9de3-81d9ebb11d1b" containerID="ce33826ee62a84265ead86a52a4376fbf66d0f1b8fc895e058ccbd2b53a6b9a3" exitCode=0 Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.040508 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57bm9" event={"ID":"ec4cd655-4062-4058-9de3-81d9ebb11d1b","Type":"ContainerDied","Data":"ce33826ee62a84265ead86a52a4376fbf66d0f1b8fc895e058ccbd2b53a6b9a3"} Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.047594 4745 generic.go:334] "Generic (PLEG): container finished" podID="ca113352-0f64-44d4-93d8-250df55bef46" containerID="000f8f83ff1b9ef9bc9037e4e8e03100a43c0dc0808f5a500117876f84e7c2c6" exitCode=0 Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.047651 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8f8b" event={"ID":"ca113352-0f64-44d4-93d8-250df55bef46","Type":"ContainerDied","Data":"000f8f83ff1b9ef9bc9037e4e8e03100a43c0dc0808f5a500117876f84e7c2c6"} Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.047687 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8f8b" event={"ID":"ca113352-0f64-44d4-93d8-250df55bef46","Type":"ContainerStarted","Data":"573f41607d6dd6b6d957b18a78298e03064dd5a2a51f87be102c64391e1fc25c"} Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.818605 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6nq8p"] Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.819859 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6nq8p" Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.830339 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.832883 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6nq8p"] Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.939561 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1-catalog-content\") pod \"community-operators-6nq8p\" (UID: \"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1\") " pod="openshift-marketplace/community-operators-6nq8p" Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.939754 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mntld\" (UniqueName: \"kubernetes.io/projected/b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1-kube-api-access-mntld\") pod \"community-operators-6nq8p\" (UID: \"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1\") " pod="openshift-marketplace/community-operators-6nq8p" Jan 21 10:43:46 crc kubenswrapper[4745]: I0121 10:43:46.939873 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1-utilities\") pod \"community-operators-6nq8p\" (UID: \"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1\") " pod="openshift-marketplace/community-operators-6nq8p" Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.041621 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1-catalog-content\") pod \"community-operators-6nq8p\" (UID: 
\"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1\") " pod="openshift-marketplace/community-operators-6nq8p" Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.042032 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mntld\" (UniqueName: \"kubernetes.io/projected/b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1-kube-api-access-mntld\") pod \"community-operators-6nq8p\" (UID: \"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1\") " pod="openshift-marketplace/community-operators-6nq8p" Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.042143 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1-utilities\") pod \"community-operators-6nq8p\" (UID: \"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1\") " pod="openshift-marketplace/community-operators-6nq8p" Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.042315 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1-catalog-content\") pod \"community-operators-6nq8p\" (UID: \"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1\") " pod="openshift-marketplace/community-operators-6nq8p" Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.042607 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1-utilities\") pod \"community-operators-6nq8p\" (UID: \"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1\") " pod="openshift-marketplace/community-operators-6nq8p" Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.064557 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mntld\" (UniqueName: \"kubernetes.io/projected/b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1-kube-api-access-mntld\") pod \"community-operators-6nq8p\" (UID: 
\"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1\") " pod="openshift-marketplace/community-operators-6nq8p" Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.072461 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q52q" event={"ID":"f7d2344c-d406-471d-aafd-7b04d5ed29cf","Type":"ContainerStarted","Data":"886d35779df81781e9f649410b1f398c71f7f3348b98f4b35c9b46cdf7910d29"} Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.074392 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-57bm9" event={"ID":"ec4cd655-4062-4058-9de3-81d9ebb11d1b","Type":"ContainerStarted","Data":"13545ffd4790e612b3afc9c9114360a6d3e05e806068bc71b238bd34b060ece7"} Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.077759 4745 generic.go:334] "Generic (PLEG): container finished" podID="ca113352-0f64-44d4-93d8-250df55bef46" containerID="930e6486871bea57fff5dc027e1c52095179466ec62a2e9ca471f13170f5834e" exitCode=0 Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.077918 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8f8b" event={"ID":"ca113352-0f64-44d4-93d8-250df55bef46","Type":"ContainerDied","Data":"930e6486871bea57fff5dc027e1c52095179466ec62a2e9ca471f13170f5834e"} Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.132793 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-57bm9" podStartSLOduration=2.688194977 podStartE2EDuration="5.132779066s" podCreationTimestamp="2026-01-21 10:43:42 +0000 UTC" firstStartedPulling="2026-01-21 10:43:43.992694319 +0000 UTC m=+408.453481917" lastFinishedPulling="2026-01-21 10:43:46.437278408 +0000 UTC m=+410.898066006" observedRunningTime="2026-01-21 10:43:47.131215132 +0000 UTC m=+411.592002730" watchObservedRunningTime="2026-01-21 10:43:47.132779066 +0000 UTC m=+411.593566664" Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 
10:43:47.190954 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6nq8p" Jan 21 10:43:47 crc kubenswrapper[4745]: I0121 10:43:47.650185 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6nq8p"] Jan 21 10:43:48 crc kubenswrapper[4745]: I0121 10:43:48.093722 4745 generic.go:334] "Generic (PLEG): container finished" podID="b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1" containerID="3edd3e00c1e3c5ea498e2c870c9a68efad634ea680304c0b698d4a86d4cf74fe" exitCode=0 Jan 21 10:43:48 crc kubenswrapper[4745]: I0121 10:43:48.094338 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nq8p" event={"ID":"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1","Type":"ContainerDied","Data":"3edd3e00c1e3c5ea498e2c870c9a68efad634ea680304c0b698d4a86d4cf74fe"} Jan 21 10:43:48 crc kubenswrapper[4745]: I0121 10:43:48.094380 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nq8p" event={"ID":"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1","Type":"ContainerStarted","Data":"bd7c3012a18668704e8ac4c6834309b2195e4bde723902fbb699d0c753babc7d"} Jan 21 10:43:48 crc kubenswrapper[4745]: I0121 10:43:48.101551 4745 generic.go:334] "Generic (PLEG): container finished" podID="f7d2344c-d406-471d-aafd-7b04d5ed29cf" containerID="886d35779df81781e9f649410b1f398c71f7f3348b98f4b35c9b46cdf7910d29" exitCode=0 Jan 21 10:43:48 crc kubenswrapper[4745]: I0121 10:43:48.101787 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q52q" event={"ID":"f7d2344c-d406-471d-aafd-7b04d5ed29cf","Type":"ContainerDied","Data":"886d35779df81781e9f649410b1f398c71f7f3348b98f4b35c9b46cdf7910d29"} Jan 21 10:43:48 crc kubenswrapper[4745]: I0121 10:43:48.106259 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-w8f8b" 
event={"ID":"ca113352-0f64-44d4-93d8-250df55bef46","Type":"ContainerStarted","Data":"3196c9ff1c6d1b26c2c6823847477bb0b1ee97d8996d1d01ddfac00386e5ddde"} Jan 21 10:43:48 crc kubenswrapper[4745]: I0121 10:43:48.154730 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-w8f8b" podStartSLOduration=2.464771494 podStartE2EDuration="4.154714413s" podCreationTimestamp="2026-01-21 10:43:44 +0000 UTC" firstStartedPulling="2026-01-21 10:43:46.050475427 +0000 UTC m=+410.511263025" lastFinishedPulling="2026-01-21 10:43:47.740418346 +0000 UTC m=+412.201205944" observedRunningTime="2026-01-21 10:43:48.152965333 +0000 UTC m=+412.613752951" watchObservedRunningTime="2026-01-21 10:43:48.154714413 +0000 UTC m=+412.615502011" Jan 21 10:43:49 crc kubenswrapper[4745]: I0121 10:43:49.114446 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nq8p" event={"ID":"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1","Type":"ContainerStarted","Data":"2cd52810b143aec40cdf6d8ba0373a6b0505fdc8aab403f334f43e0191528fd5"} Jan 21 10:43:49 crc kubenswrapper[4745]: I0121 10:43:49.121007 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2q52q" event={"ID":"f7d2344c-d406-471d-aafd-7b04d5ed29cf","Type":"ContainerStarted","Data":"207aea0ec0030647fe8bfd5e1ec5c5e06c820f5f2f858bae99ccbdbfe7c6eb0b"} Jan 21 10:43:49 crc kubenswrapper[4745]: I0121 10:43:49.161427 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2q52q" podStartSLOduration=2.56115503 podStartE2EDuration="5.161395819s" podCreationTimestamp="2026-01-21 10:43:44 +0000 UTC" firstStartedPulling="2026-01-21 10:43:46.033005893 +0000 UTC m=+410.493793491" lastFinishedPulling="2026-01-21 10:43:48.633246682 +0000 UTC m=+413.094034280" observedRunningTime="2026-01-21 10:43:49.1593046 +0000 UTC m=+413.620092198" 
watchObservedRunningTime="2026-01-21 10:43:49.161395819 +0000 UTC m=+413.622183417"
Jan 21 10:43:50 crc kubenswrapper[4745]: I0121 10:43:50.128625 4745 generic.go:334] "Generic (PLEG): container finished" podID="b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1" containerID="2cd52810b143aec40cdf6d8ba0373a6b0505fdc8aab403f334f43e0191528fd5" exitCode=0
Jan 21 10:43:50 crc kubenswrapper[4745]: I0121 10:43:50.129647 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nq8p" event={"ID":"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1","Type":"ContainerDied","Data":"2cd52810b143aec40cdf6d8ba0373a6b0505fdc8aab403f334f43e0191528fd5"}
Jan 21 10:43:52 crc kubenswrapper[4745]: I0121 10:43:52.148771 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6nq8p" event={"ID":"b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1","Type":"ContainerStarted","Data":"969185ef87df2b3ae5c002aa8ae04870aa7e0c8483ea10ffdaa837dbc36a3d3a"}
Jan 21 10:43:52 crc kubenswrapper[4745]: I0121 10:43:52.169090 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6nq8p" podStartSLOduration=3.190287655 podStartE2EDuration="6.169067901s" podCreationTimestamp="2026-01-21 10:43:46 +0000 UTC" firstStartedPulling="2026-01-21 10:43:48.102014976 +0000 UTC m=+412.562802584" lastFinishedPulling="2026-01-21 10:43:51.080795232 +0000 UTC m=+415.541582830" observedRunningTime="2026-01-21 10:43:52.167751423 +0000 UTC m=+416.628539021" watchObservedRunningTime="2026-01-21 10:43:52.169067901 +0000 UTC m=+416.629855499"
Jan 21 10:43:52 crc kubenswrapper[4745]: I0121 10:43:52.575933 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-57bm9"
Jan 21 10:43:52 crc kubenswrapper[4745]: I0121 10:43:52.576011 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-57bm9"
Jan 21 10:43:52 crc kubenswrapper[4745]: I0121 10:43:52.637511 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-57bm9"
Jan 21 10:43:53 crc kubenswrapper[4745]: I0121 10:43:53.199847 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-57bm9"
Jan 21 10:43:54 crc kubenswrapper[4745]: I0121 10:43:54.739370 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-w8f8b"
Jan 21 10:43:54 crc kubenswrapper[4745]: I0121 10:43:54.739443 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-w8f8b"
Jan 21 10:43:54 crc kubenswrapper[4745]: I0121 10:43:54.793223 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-w8f8b"
Jan 21 10:43:54 crc kubenswrapper[4745]: I0121 10:43:54.932173 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2q52q"
Jan 21 10:43:54 crc kubenswrapper[4745]: I0121 10:43:54.932271 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2q52q"
Jan 21 10:43:54 crc kubenswrapper[4745]: I0121 10:43:54.984823 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2q52q"
Jan 21 10:43:55 crc kubenswrapper[4745]: I0121 10:43:55.204319 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2q52q"
Jan 21 10:43:55 crc kubenswrapper[4745]: I0121 10:43:55.208560 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-w8f8b"
Jan 21 10:43:57 crc kubenswrapper[4745]: I0121 10:43:57.192010 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6nq8p"
Jan 21 10:43:57 crc kubenswrapper[4745]: I0121 10:43:57.193795 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6nq8p"
Jan 21 10:43:57 crc kubenswrapper[4745]: I0121 10:43:57.235862 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6nq8p"
Jan 21 10:43:57 crc kubenswrapper[4745]: I0121 10:43:57.595992 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" podUID="f9505ea9-d57f-4afa-add9-8e7e9eb84ece" containerName="registry" containerID="cri-o://74909c1499cbaa004be6a4c17fd4f24aed94532b43269cf62712935c9b072232" gracePeriod=30
Jan 21 10:43:58 crc kubenswrapper[4745]: I0121 10:43:58.231212 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6nq8p"
Jan 21 10:44:01 crc kubenswrapper[4745]: I0121 10:44:01.196353 4745 generic.go:334] "Generic (PLEG): container finished" podID="f9505ea9-d57f-4afa-add9-8e7e9eb84ece" containerID="74909c1499cbaa004be6a4c17fd4f24aed94532b43269cf62712935c9b072232" exitCode=0
Jan 21 10:44:01 crc kubenswrapper[4745]: I0121 10:44:01.196447 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" event={"ID":"f9505ea9-d57f-4afa-add9-8e7e9eb84ece","Type":"ContainerDied","Data":"74909c1499cbaa004be6a4c17fd4f24aed94532b43269cf62712935c9b072232"}
Jan 21 10:44:01 crc kubenswrapper[4745]: I0121 10:44:01.367272 4745 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-4z5zq container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused" start-of-body=
Jan 21 10:44:01 crc kubenswrapper[4745]: I0121 10:44:01.367328 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" podUID="f9505ea9-d57f-4afa-add9-8e7e9eb84ece" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.38:5000/healthz\": dial tcp 10.217.0.38:5000: connect: connection refused"
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.021352 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.202856 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-certificates\") pod \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") "
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.202928 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-ca-trust-extracted\") pod \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") "
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.202972 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr4gd\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-kube-api-access-gr4gd\") pod \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") "
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.203048 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-tls\") pod \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") "
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.203215 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") "
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.203286 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-installation-pull-secrets\") pod \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") "
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.203310 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-trusted-ca\") pod \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") "
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.203360 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-bound-sa-token\") pod \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\" (UID: \"f9505ea9-d57f-4afa-add9-8e7e9eb84ece\") "
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.203618 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq" event={"ID":"f9505ea9-d57f-4afa-add9-8e7e9eb84ece","Type":"ContainerDied","Data":"cfbd0b9070595ba44db8c451050a71ad039fda195bb47a1f5c15ade0580cc54b"}
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.203669 4745 scope.go:117] "RemoveContainer" containerID="74909c1499cbaa004be6a4c17fd4f24aed94532b43269cf62712935c9b072232"
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.203794 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4z5zq"
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.204700 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "f9505ea9-d57f-4afa-add9-8e7e9eb84ece" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.204841 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "f9505ea9-d57f-4afa-add9-8e7e9eb84ece" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.215471 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-kube-api-access-gr4gd" (OuterVolumeSpecName: "kube-api-access-gr4gd") pod "f9505ea9-d57f-4afa-add9-8e7e9eb84ece" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece"). InnerVolumeSpecName "kube-api-access-gr4gd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.216124 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "f9505ea9-d57f-4afa-add9-8e7e9eb84ece" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.219213 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "f9505ea9-d57f-4afa-add9-8e7e9eb84ece" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.222290 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "f9505ea9-d57f-4afa-add9-8e7e9eb84ece" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.225361 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "f9505ea9-d57f-4afa-add9-8e7e9eb84ece" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.225731 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "f9505ea9-d57f-4afa-add9-8e7e9eb84ece" (UID: "f9505ea9-d57f-4afa-add9-8e7e9eb84ece"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.304955 4745 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.305022 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.305035 4745 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-bound-sa-token\") on node \"crc\" DevicePath \"\""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.305044 4745 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-certificates\") on node \"crc\" DevicePath \"\""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.305055 4745 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.305065 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr4gd\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-kube-api-access-gr4gd\") on node \"crc\" DevicePath \"\""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.305073 4745 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f9505ea9-d57f-4afa-add9-8e7e9eb84ece-registry-tls\") on node \"crc\" DevicePath \"\""
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.533298 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z5zq"]
Jan 21 10:44:02 crc kubenswrapper[4745]: I0121 10:44:02.543129 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4z5zq"]
Jan 21 10:44:04 crc kubenswrapper[4745]: I0121 10:44:04.009791 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9505ea9-d57f-4afa-add9-8e7e9eb84ece" path="/var/lib/kubelet/pods/f9505ea9-d57f-4afa-add9-8e7e9eb84ece/volumes"
Jan 21 10:44:15 crc kubenswrapper[4745]: I0121 10:44:15.866589 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:44:15 crc kubenswrapper[4745]: I0121 10:44:15.867591 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:44:15 crc kubenswrapper[4745]: I0121 10:44:15.867682 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm"
Jan 21 10:44:15 crc kubenswrapper[4745]: I0121 10:44:15.868704 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3df09e71c9d2707ec57491f50eb014c05e1cb37d897939e30ac06524ed542e46"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 10:44:15 crc kubenswrapper[4745]: I0121 10:44:15.868779 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://3df09e71c9d2707ec57491f50eb014c05e1cb37d897939e30ac06524ed542e46" gracePeriod=600
Jan 21 10:44:16 crc kubenswrapper[4745]: I0121 10:44:16.314331 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="3df09e71c9d2707ec57491f50eb014c05e1cb37d897939e30ac06524ed542e46" exitCode=0
Jan 21 10:44:16 crc kubenswrapper[4745]: I0121 10:44:16.314513 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"3df09e71c9d2707ec57491f50eb014c05e1cb37d897939e30ac06524ed542e46"}
Jan 21 10:44:16 crc kubenswrapper[4745]: I0121 10:44:16.314899 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"afdf3a4d67c346d0632a443ca9dab222b7a63eeb4c78313a794bd20986cb3242"}
Jan 21 10:44:16 crc kubenswrapper[4745]: I0121 10:44:16.314922 4745 scope.go:117] "RemoveContainer" containerID="0d7486eae2022698d215270d2e8a6d811472b43a03907aa7876c33ea0e24ea7a"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.219976 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"]
Jan 21 10:45:00 crc kubenswrapper[4745]: E0121 10:45:00.223035 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9505ea9-d57f-4afa-add9-8e7e9eb84ece" containerName="registry"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.223071 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9505ea9-d57f-4afa-add9-8e7e9eb84ece" containerName="registry"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.223188 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9505ea9-d57f-4afa-add9-8e7e9eb84ece" containerName="registry"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.223861 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.227115 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.229800 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.285912 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"]
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.414191 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7db889bc-c207-4047-8b3a-47037f71ac5c-secret-volume\") pod \"collect-profiles-29483205-2pqw5\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.414355 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4ljr\" (UniqueName: \"kubernetes.io/projected/7db889bc-c207-4047-8b3a-47037f71ac5c-kube-api-access-g4ljr\") pod \"collect-profiles-29483205-2pqw5\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.414409 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7db889bc-c207-4047-8b3a-47037f71ac5c-config-volume\") pod \"collect-profiles-29483205-2pqw5\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.515326 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4ljr\" (UniqueName: \"kubernetes.io/projected/7db889bc-c207-4047-8b3a-47037f71ac5c-kube-api-access-g4ljr\") pod \"collect-profiles-29483205-2pqw5\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.515406 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7db889bc-c207-4047-8b3a-47037f71ac5c-config-volume\") pod \"collect-profiles-29483205-2pqw5\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.515440 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7db889bc-c207-4047-8b3a-47037f71ac5c-secret-volume\") pod \"collect-profiles-29483205-2pqw5\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.516701 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7db889bc-c207-4047-8b3a-47037f71ac5c-config-volume\") pod \"collect-profiles-29483205-2pqw5\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.524114 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7db889bc-c207-4047-8b3a-47037f71ac5c-secret-volume\") pod \"collect-profiles-29483205-2pqw5\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.550429 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4ljr\" (UniqueName: \"kubernetes.io/projected/7db889bc-c207-4047-8b3a-47037f71ac5c-kube-api-access-g4ljr\") pod \"collect-profiles-29483205-2pqw5\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:00 crc kubenswrapper[4745]: I0121 10:45:00.845770 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:01 crc kubenswrapper[4745]: I0121 10:45:01.100627 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"]
Jan 21 10:45:02 crc kubenswrapper[4745]: I0121 10:45:02.026389 4745 generic.go:334] "Generic (PLEG): container finished" podID="7db889bc-c207-4047-8b3a-47037f71ac5c" containerID="966810c431842121881a480692de06203687ac5b3410032f16f296f7b5f66ef2" exitCode=0
Jan 21 10:45:02 crc kubenswrapper[4745]: I0121 10:45:02.026449 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5" event={"ID":"7db889bc-c207-4047-8b3a-47037f71ac5c","Type":"ContainerDied","Data":"966810c431842121881a480692de06203687ac5b3410032f16f296f7b5f66ef2"}
Jan 21 10:45:02 crc kubenswrapper[4745]: I0121 10:45:02.026487 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5" event={"ID":"7db889bc-c207-4047-8b3a-47037f71ac5c","Type":"ContainerStarted","Data":"499291399c9800ed6538a1e22475426ae217b3b08b7f8faba4b6f5fb8f1e05e7"}
Jan 21 10:45:03 crc kubenswrapper[4745]: I0121 10:45:03.368790 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:03 crc kubenswrapper[4745]: I0121 10:45:03.469258 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7db889bc-c207-4047-8b3a-47037f71ac5c-config-volume\") pod \"7db889bc-c207-4047-8b3a-47037f71ac5c\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") "
Jan 21 10:45:03 crc kubenswrapper[4745]: I0121 10:45:03.469366 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7db889bc-c207-4047-8b3a-47037f71ac5c-secret-volume\") pod \"7db889bc-c207-4047-8b3a-47037f71ac5c\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") "
Jan 21 10:45:03 crc kubenswrapper[4745]: I0121 10:45:03.469414 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4ljr\" (UniqueName: \"kubernetes.io/projected/7db889bc-c207-4047-8b3a-47037f71ac5c-kube-api-access-g4ljr\") pod \"7db889bc-c207-4047-8b3a-47037f71ac5c\" (UID: \"7db889bc-c207-4047-8b3a-47037f71ac5c\") "
Jan 21 10:45:03 crc kubenswrapper[4745]: I0121 10:45:03.470957 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7db889bc-c207-4047-8b3a-47037f71ac5c-config-volume" (OuterVolumeSpecName: "config-volume") pod "7db889bc-c207-4047-8b3a-47037f71ac5c" (UID: "7db889bc-c207-4047-8b3a-47037f71ac5c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:45:03 crc kubenswrapper[4745]: I0121 10:45:03.477223 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db889bc-c207-4047-8b3a-47037f71ac5c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7db889bc-c207-4047-8b3a-47037f71ac5c" (UID: "7db889bc-c207-4047-8b3a-47037f71ac5c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:45:03 crc kubenswrapper[4745]: I0121 10:45:03.477078 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db889bc-c207-4047-8b3a-47037f71ac5c-kube-api-access-g4ljr" (OuterVolumeSpecName: "kube-api-access-g4ljr") pod "7db889bc-c207-4047-8b3a-47037f71ac5c" (UID: "7db889bc-c207-4047-8b3a-47037f71ac5c"). InnerVolumeSpecName "kube-api-access-g4ljr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:45:03 crc kubenswrapper[4745]: I0121 10:45:03.570340 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7db889bc-c207-4047-8b3a-47037f71ac5c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:45:03 crc kubenswrapper[4745]: I0121 10:45:03.570684 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7db889bc-c207-4047-8b3a-47037f71ac5c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 21 10:45:03 crc kubenswrapper[4745]: I0121 10:45:03.570773 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4ljr\" (UniqueName: \"kubernetes.io/projected/7db889bc-c207-4047-8b3a-47037f71ac5c-kube-api-access-g4ljr\") on node \"crc\" DevicePath \"\""
Jan 21 10:45:04 crc kubenswrapper[4745]: I0121 10:45:04.041814 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5" event={"ID":"7db889bc-c207-4047-8b3a-47037f71ac5c","Type":"ContainerDied","Data":"499291399c9800ed6538a1e22475426ae217b3b08b7f8faba4b6f5fb8f1e05e7"}
Jan 21 10:45:04 crc kubenswrapper[4745]: I0121 10:45:04.042482 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="499291399c9800ed6538a1e22475426ae217b3b08b7f8faba4b6f5fb8f1e05e7"
Jan 21 10:45:04 crc kubenswrapper[4745]: I0121 10:45:04.041902 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"
Jan 21 10:45:56 crc kubenswrapper[4745]: I0121 10:45:56.367949 4745 scope.go:117] "RemoveContainer" containerID="8d20fa77a126cc4d51508bec0f87f4b17a96d5b784d2ce651ba9c8bed021b1b8"
Jan 21 10:46:45 crc kubenswrapper[4745]: I0121 10:46:45.866861 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:46:45 crc kubenswrapper[4745]: I0121 10:46:45.867793 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:46:56 crc kubenswrapper[4745]: I0121 10:46:56.408970 4745 scope.go:117] "RemoveContainer" containerID="8f5a903f3ffc943bd0071f511ba8272b534d9d33537b5ae029d8873e5af70599"
Jan 21 10:46:58 crc kubenswrapper[4745]: I0121 10:46:58.308792 4745 scope.go:117] "RemoveContainer" containerID="7db6e1fcec09e6a97879daae2ea7f9aa33b8e7b0282dec6c5a7c0959245d9e4b"
Jan 21 10:47:15 crc kubenswrapper[4745]: I0121 10:47:15.866621 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:47:15 crc kubenswrapper[4745]: I0121 10:47:15.867615 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:47:45 crc kubenswrapper[4745]: I0121 10:47:45.866833 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 10:47:45 crc kubenswrapper[4745]: I0121 10:47:45.867913 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 10:47:45 crc kubenswrapper[4745]: I0121 10:47:45.868006 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm"
Jan 21 10:47:45 crc kubenswrapper[4745]: I0121 10:47:45.868979 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"afdf3a4d67c346d0632a443ca9dab222b7a63eeb4c78313a794bd20986cb3242"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 21 10:47:45 crc kubenswrapper[4745]: I0121 10:47:45.869060 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://afdf3a4d67c346d0632a443ca9dab222b7a63eeb4c78313a794bd20986cb3242" gracePeriod=600
Jan 21 10:47:47 crc kubenswrapper[4745]: I0121 10:47:47.122438 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="afdf3a4d67c346d0632a443ca9dab222b7a63eeb4c78313a794bd20986cb3242" exitCode=0
Jan 21 10:47:47 crc kubenswrapper[4745]: I0121 10:47:47.123320 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"afdf3a4d67c346d0632a443ca9dab222b7a63eeb4c78313a794bd20986cb3242"}
Jan 21 10:47:47 crc kubenswrapper[4745]: I0121 10:47:47.123371 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"5b1c6cf55f7b7acda4bdbdb072152cc988d22c5663c32b750b1831934e03f8b3"}
Jan 21 10:47:47 crc kubenswrapper[4745]: I0121 10:47:47.123400 4745 scope.go:117] "RemoveContainer" containerID="3df09e71c9d2707ec57491f50eb014c05e1cb37d897939e30ac06524ed542e46"
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.304925 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rgtt2"]
Jan 21 10:49:09 crc kubenswrapper[4745]: E0121 10:49:09.306122 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db889bc-c207-4047-8b3a-47037f71ac5c" containerName="collect-profiles"
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.306142 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db889bc-c207-4047-8b3a-47037f71ac5c" containerName="collect-profiles"
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.306265 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db889bc-c207-4047-8b3a-47037f71ac5c" containerName="collect-profiles"
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.306872 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rgtt2"
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.310304 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.310446 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.311601 4745 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-94qd5"
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.320591 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rgtt2"]
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.352733 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-s5t4j"]
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.353860 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-s5t4j"
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.357863 4745 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-gsfb2"
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.371049 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7xg5s"]
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.379855 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-s5t4j"]
Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.380332 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-7xg5s" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.384286 4745 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-trbnr" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.400088 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7xg5s"] Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.449414 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtq8c\" (UniqueName: \"kubernetes.io/projected/6f55bdba-45e5-485d-ae8f-a8576885b3ff-kube-api-access-gtq8c\") pod \"cert-manager-cainjector-cf98fcc89-rgtt2\" (UID: \"6f55bdba-45e5-485d-ae8f-a8576885b3ff\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rgtt2" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.551630 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfwqj\" (UniqueName: \"kubernetes.io/projected/28ac8429-55e4-4387-99d2-f20e654f0dde-kube-api-access-rfwqj\") pod \"cert-manager-webhook-687f57d79b-7xg5s\" (UID: \"28ac8429-55e4-4387-99d2-f20e654f0dde\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7xg5s" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.551721 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtq8c\" (UniqueName: \"kubernetes.io/projected/6f55bdba-45e5-485d-ae8f-a8576885b3ff-kube-api-access-gtq8c\") pod \"cert-manager-cainjector-cf98fcc89-rgtt2\" (UID: \"6f55bdba-45e5-485d-ae8f-a8576885b3ff\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rgtt2" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.551791 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7wb8\" (UniqueName: 
\"kubernetes.io/projected/60b550eb-7b13-4042-99c2-70f21e9ec81f-kube-api-access-h7wb8\") pod \"cert-manager-858654f9db-s5t4j\" (UID: \"60b550eb-7b13-4042-99c2-70f21e9ec81f\") " pod="cert-manager/cert-manager-858654f9db-s5t4j" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.574841 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtq8c\" (UniqueName: \"kubernetes.io/projected/6f55bdba-45e5-485d-ae8f-a8576885b3ff-kube-api-access-gtq8c\") pod \"cert-manager-cainjector-cf98fcc89-rgtt2\" (UID: \"6f55bdba-45e5-485d-ae8f-a8576885b3ff\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-rgtt2" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.638208 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rgtt2" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.653855 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfwqj\" (UniqueName: \"kubernetes.io/projected/28ac8429-55e4-4387-99d2-f20e654f0dde-kube-api-access-rfwqj\") pod \"cert-manager-webhook-687f57d79b-7xg5s\" (UID: \"28ac8429-55e4-4387-99d2-f20e654f0dde\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7xg5s" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.653959 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7wb8\" (UniqueName: \"kubernetes.io/projected/60b550eb-7b13-4042-99c2-70f21e9ec81f-kube-api-access-h7wb8\") pod \"cert-manager-858654f9db-s5t4j\" (UID: \"60b550eb-7b13-4042-99c2-70f21e9ec81f\") " pod="cert-manager/cert-manager-858654f9db-s5t4j" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.677272 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7wb8\" (UniqueName: \"kubernetes.io/projected/60b550eb-7b13-4042-99c2-70f21e9ec81f-kube-api-access-h7wb8\") pod \"cert-manager-858654f9db-s5t4j\" (UID: 
\"60b550eb-7b13-4042-99c2-70f21e9ec81f\") " pod="cert-manager/cert-manager-858654f9db-s5t4j" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.689555 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfwqj\" (UniqueName: \"kubernetes.io/projected/28ac8429-55e4-4387-99d2-f20e654f0dde-kube-api-access-rfwqj\") pod \"cert-manager-webhook-687f57d79b-7xg5s\" (UID: \"28ac8429-55e4-4387-99d2-f20e654f0dde\") " pod="cert-manager/cert-manager-webhook-687f57d79b-7xg5s" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.700176 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-7xg5s" Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.915698 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-rgtt2"] Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.930359 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:49:09 crc kubenswrapper[4745]: I0121 10:49:09.974088 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-s5t4j" Jan 21 10:49:10 crc kubenswrapper[4745]: W0121 10:49:10.006347 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28ac8429_55e4_4387_99d2_f20e654f0dde.slice/crio-797a44e7db8876a57b3b1c72ebc5f76b2c075dffa4b6f6b44dc56b90928b4be2 WatchSource:0}: Error finding container 797a44e7db8876a57b3b1c72ebc5f76b2c075dffa4b6f6b44dc56b90928b4be2: Status 404 returned error can't find the container with id 797a44e7db8876a57b3b1c72ebc5f76b2c075dffa4b6f6b44dc56b90928b4be2 Jan 21 10:49:10 crc kubenswrapper[4745]: I0121 10:49:10.007791 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-7xg5s"] Jan 21 10:49:10 crc kubenswrapper[4745]: I0121 10:49:10.207851 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-s5t4j"] Jan 21 10:49:10 crc kubenswrapper[4745]: W0121 10:49:10.212509 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60b550eb_7b13_4042_99c2_70f21e9ec81f.slice/crio-ec7a6e81e36824b666814daf86e2f369603a2165260057b958e84d75c700c7b6 WatchSource:0}: Error finding container ec7a6e81e36824b666814daf86e2f369603a2165260057b958e84d75c700c7b6: Status 404 returned error can't find the container with id ec7a6e81e36824b666814daf86e2f369603a2165260057b958e84d75c700c7b6 Jan 21 10:49:10 crc kubenswrapper[4745]: I0121 10:49:10.669707 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-7xg5s" event={"ID":"28ac8429-55e4-4387-99d2-f20e654f0dde","Type":"ContainerStarted","Data":"797a44e7db8876a57b3b1c72ebc5f76b2c075dffa4b6f6b44dc56b90928b4be2"} Jan 21 10:49:10 crc kubenswrapper[4745]: I0121 10:49:10.673297 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-rgtt2" event={"ID":"6f55bdba-45e5-485d-ae8f-a8576885b3ff","Type":"ContainerStarted","Data":"3bee18b10b9b9a83f8d6108b68fa197888cec4f7e889b0fb76b1f567daaaf829"} Jan 21 10:49:10 crc kubenswrapper[4745]: I0121 10:49:10.674476 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-s5t4j" event={"ID":"60b550eb-7b13-4042-99c2-70f21e9ec81f","Type":"ContainerStarted","Data":"ec7a6e81e36824b666814daf86e2f369603a2165260057b958e84d75c700c7b6"} Jan 21 10:49:18 crc kubenswrapper[4745]: I0121 10:49:18.719700 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l7mcj"] Jan 21 10:49:18 crc kubenswrapper[4745]: I0121 10:49:18.721215 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovn-controller" containerID="cri-o://869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118" gracePeriod=30 Jan 21 10:49:18 crc kubenswrapper[4745]: I0121 10:49:18.721694 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="northd" containerID="cri-o://11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17" gracePeriod=30 Jan 21 10:49:18 crc kubenswrapper[4745]: I0121 10:49:18.721956 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="sbdb" containerID="cri-o://d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2" gracePeriod=30 Jan 21 10:49:18 crc kubenswrapper[4745]: I0121 10:49:18.722032 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" 
podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="nbdb" containerID="cri-o://88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6" gracePeriod=30 Jan 21 10:49:18 crc kubenswrapper[4745]: I0121 10:49:18.722816 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65" gracePeriod=30 Jan 21 10:49:18 crc kubenswrapper[4745]: I0121 10:49:18.722886 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="kube-rbac-proxy-node" containerID="cri-o://8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51" gracePeriod=30 Jan 21 10:49:18 crc kubenswrapper[4745]: I0121 10:49:18.722923 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovn-acl-logging" containerID="cri-o://4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af" gracePeriod=30 Jan 21 10:49:18 crc kubenswrapper[4745]: I0121 10:49:18.784049 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" containerID="cri-o://a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c" gracePeriod=30 Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.306151 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/3.log" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.309743 4745 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovn-acl-logging/0.log" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.310281 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovn-controller/0.log" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.310979 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.385994 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-sph9g"] Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386354 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="sbdb" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386379 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="sbdb" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386399 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="northd" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386408 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="northd" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386422 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="kubecfg-setup" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386430 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="kubecfg-setup" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386440 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386448 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386462 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386470 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386485 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="kube-rbac-proxy-node" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386499 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="kube-rbac-proxy-node" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386514 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovn-acl-logging" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386543 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovn-acl-logging" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386554 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386562 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386574 4745 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386581 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386592 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovn-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386600 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovn-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386614 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386623 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.386633 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="nbdb" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386642 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="nbdb" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386781 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="sbdb" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386800 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="kube-rbac-proxy-ovn-metrics" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386822 4745 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovn-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386832 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386840 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386848 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386856 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="northd" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386866 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386877 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovn-acl-logging" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386888 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="nbdb" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.386898 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="kube-rbac-proxy-node" Jan 21 10:49:19 crc kubenswrapper[4745]: E0121 10:49:19.387038 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 
10:49:19.387048 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.387166 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerName="ovnkube-controller" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.412820 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.511379 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-var-lib-openvswitch\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.511484 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-env-overrides\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.511569 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-systemd-units\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.511553 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). 
InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.511623 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.511880 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-ovn-kubernetes\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.511918 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512114 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-etc-openvswitch\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512186 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-script-lib\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512198 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512221 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-slash\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512248 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512252 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-slash" (OuterVolumeSpecName: "host-slash") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512304 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-ovn\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512357 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-config\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512382 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovn-node-metrics-cert\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512401 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512418 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-kubelet\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512450 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-node-log\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512470 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-openvswitch\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512497 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-bin\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512524 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-netns\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512572 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-systemd\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512602 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-netd\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512635 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512679 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-log-socket\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512571 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-node-log" (OuterVolumeSpecName: "node-log") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512601 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512636 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512662 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512686 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512737 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512710 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf85x\" (UniqueName: \"kubernetes.io/projected/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-kube-api-access-xf85x\") pod \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\" (UID: \"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed\") " Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512816 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512861 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-log-socket" (OuterVolumeSpecName: "log-socket") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512856 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.512884 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513102 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dr2q\" (UniqueName: \"kubernetes.io/projected/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-kube-api-access-7dr2q\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513143 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-cni-bin\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513172 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-ovnkube-config\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513202 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-run-openvswitch\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513239 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-ovnkube-script-lib\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513259 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513283 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-node-log\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513306 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-var-lib-openvswitch\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513334 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-run-netns\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513357 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-slash\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513393 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-ovn-node-metrics-cert\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513419 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-etc-openvswitch\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513450 
4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-run-ovn\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513471 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-log-socket\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513499 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513522 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-cni-netd\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513560 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-run-systemd\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 
10:49:19.513607 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-env-overrides\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513637 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-systemd-units\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513672 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-kubelet\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513939 4745 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513972 4745 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.513988 4745 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc 
kubenswrapper[4745]: I0121 10:49:19.514003 4745 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514017 4745 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514028 4745 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514040 4745 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-slash\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514054 4745 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514066 4745 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514076 4745 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-node-log\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514089 4745 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514101 4745 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514113 4745 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514124 4745 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514135 4745 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514148 4745 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.514161 4745 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-log-socket\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.522213 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovn-node-metrics-cert" 
(OuterVolumeSpecName: "ovn-node-metrics-cert") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.522212 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-kube-api-access-xf85x" (OuterVolumeSpecName: "kube-api-access-xf85x") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "kube-api-access-xf85x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.527935 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" (UID: "04dff8d4-15bb-4f8e-b71a-bb104f6de3ed"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615361 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-run-ovn\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615416 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-log-socket\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615441 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615465 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-cni-netd\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615490 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-run-systemd\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc 
kubenswrapper[4745]: I0121 10:49:19.615515 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-env-overrides\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615548 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-systemd-units\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615575 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-kubelet\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615669 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dr2q\" (UniqueName: \"kubernetes.io/projected/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-kube-api-access-7dr2q\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615695 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-cni-bin\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615722 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-ovnkube-config\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615749 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-ovnkube-script-lib\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615785 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-run-openvswitch\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615808 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615830 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-node-log\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615855 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-var-lib-openvswitch\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615877 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-run-netns\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615900 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-slash\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615935 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-ovn-node-metrics-cert\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615954 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-etc-openvswitch\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.615995 4745 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.616009 4745 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.616019 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf85x\" (UniqueName: \"kubernetes.io/projected/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed-kube-api-access-xf85x\") on node \"crc\" DevicePath \"\"" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.616081 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-etc-openvswitch\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.616126 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-run-ovn\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.616147 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-log-socket\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.616167 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-run-ovn-kubernetes\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.616194 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-cni-netd\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.616222 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-run-systemd\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.617001 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-env-overrides\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.617046 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-systemd-units\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.617073 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-kubelet\") pod \"ovnkube-node-sph9g\" (UID: 
\"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.617451 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-cni-bin\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.618164 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-ovnkube-config\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.618697 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-ovnkube-script-lib\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.618741 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-run-openvswitch\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.618767 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.618789 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-node-log\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.618813 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-var-lib-openvswitch\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.618924 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-run-netns\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.619020 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-host-slash\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.632397 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-ovn-node-metrics-cert\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.634819 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dr2q\" (UniqueName: \"kubernetes.io/projected/ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3-kube-api-access-7dr2q\") pod \"ovnkube-node-sph9g\" (UID: \"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3\") " pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.733798 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.742732 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/2.log" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.743331 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/1.log" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.743390 4745 generic.go:334] "Generic (PLEG): container finished" podID="25458900-3da2-4c9d-8463-9acde2add0a6" containerID="655ed50b4ec230b78b2634d5bb83e158e7df4aea82278fb856a0f0f490e5d178" exitCode=2 Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.743470 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p8q45" event={"ID":"25458900-3da2-4c9d-8463-9acde2add0a6","Type":"ContainerDied","Data":"655ed50b4ec230b78b2634d5bb83e158e7df4aea82278fb856a0f0f490e5d178"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.743544 4745 scope.go:117] "RemoveContainer" containerID="714407a10230aa649925c34cef574bad9510d3268300bcb3dadaba7c6bc9d9a7" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.744310 4745 scope.go:117] "RemoveContainer" containerID="655ed50b4ec230b78b2634d5bb83e158e7df4aea82278fb856a0f0f490e5d178" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.748974 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovnkube-controller/3.log" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.752011 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovn-acl-logging/0.log" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.752570 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovn-controller/0.log" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.752970 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c" exitCode=0 Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.752999 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2" exitCode=0 Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753010 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6" exitCode=0 Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753020 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17" exitCode=0 Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753029 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65" exitCode=0 Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753036 4745 generic.go:334] "Generic (PLEG): container finished" 
podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51" exitCode=0 Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753045 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af" exitCode=143 Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753054 4745 generic.go:334] "Generic (PLEG): container finished" podID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" containerID="869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118" exitCode=143 Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753081 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753114 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753132 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753145 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753158 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753169 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753185 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753203 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753210 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753217 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753224 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753230 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753237 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753244 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753250 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753257 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753266 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753276 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753285 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753291 4745 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753298 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753305 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753312 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753319 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753326 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753332 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753339 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753348 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753358 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753367 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753375 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753381 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753387 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753394 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753401 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51"} Jan 21 
10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753407 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753414 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753421 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753429 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" event={"ID":"04dff8d4-15bb-4f8e-b71a-bb104f6de3ed","Type":"ContainerDied","Data":"d5accf1adb2c50c251ba04041bcd212e05c044118907d8628c7daa54af5b84ed"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753439 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753447 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753454 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753461 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753468 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753477 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753484 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753491 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753497 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753503 4745 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab"} Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.753678 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l7mcj" Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.798993 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l7mcj"] Jan 21 10:49:19 crc kubenswrapper[4745]: I0121 10:49:19.802470 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l7mcj"] Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.010875 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04dff8d4-15bb-4f8e-b71a-bb104f6de3ed" path="/var/lib/kubelet/pods/04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/volumes" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.650659 4745 scope.go:117] "RemoveContainer" containerID="a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.705455 4745 scope.go:117] "RemoveContainer" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.741729 4745 scope.go:117] "RemoveContainer" containerID="d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.777364 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/2.log" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.778781 4745 scope.go:117] "RemoveContainer" containerID="88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.781262 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" event={"ID":"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3","Type":"ContainerStarted","Data":"dd3aba121a05b726cc791ff995860c02c570c4cfb0098f9ff6b9dac19e5815e4"} Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.796724 4745 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovn-acl-logging/0.log" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.799101 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l7mcj_04dff8d4-15bb-4f8e-b71a-bb104f6de3ed/ovn-controller/0.log" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.824954 4745 scope.go:117] "RemoveContainer" containerID="11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.880620 4745 scope.go:117] "RemoveContainer" containerID="76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.937601 4745 scope.go:117] "RemoveContainer" containerID="8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.969106 4745 scope.go:117] "RemoveContainer" containerID="4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af" Jan 21 10:49:20 crc kubenswrapper[4745]: I0121 10:49:20.991456 4745 scope.go:117] "RemoveContainer" containerID="869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.017993 4745 scope.go:117] "RemoveContainer" containerID="36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.748967 4745 scope.go:117] "RemoveContainer" containerID="a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c" Jan 21 10:49:21 crc kubenswrapper[4745]: E0121 10:49:21.750434 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c\": container with ID starting with a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c not 
found: ID does not exist" containerID="a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.750506 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c"} err="failed to get container status \"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c\": rpc error: code = NotFound desc = could not find container \"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c\": container with ID starting with a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.750572 4745 scope.go:117] "RemoveContainer" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" Jan 21 10:49:21 crc kubenswrapper[4745]: E0121 10:49:21.750939 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\": container with ID starting with e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25 not found: ID does not exist" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.750971 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25"} err="failed to get container status \"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\": rpc error: code = NotFound desc = could not find container \"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\": container with ID starting with e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.750990 
4745 scope.go:117] "RemoveContainer" containerID="d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2" Jan 21 10:49:21 crc kubenswrapper[4745]: E0121 10:49:21.751302 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\": container with ID starting with d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2 not found: ID does not exist" containerID="d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.751337 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2"} err="failed to get container status \"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\": rpc error: code = NotFound desc = could not find container \"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\": container with ID starting with d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.751357 4745 scope.go:117] "RemoveContainer" containerID="88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6" Jan 21 10:49:21 crc kubenswrapper[4745]: E0121 10:49:21.752063 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\": container with ID starting with 88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6 not found: ID does not exist" containerID="88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.752119 4745 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6"} err="failed to get container status \"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\": rpc error: code = NotFound desc = could not find container \"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\": container with ID starting with 88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.752161 4745 scope.go:117] "RemoveContainer" containerID="11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17" Jan 21 10:49:21 crc kubenswrapper[4745]: E0121 10:49:21.752853 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\": container with ID starting with 11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17 not found: ID does not exist" containerID="11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.752883 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17"} err="failed to get container status \"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\": rpc error: code = NotFound desc = could not find container \"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\": container with ID starting with 11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.752900 4745 scope.go:117] "RemoveContainer" containerID="76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65" Jan 21 10:49:21 crc kubenswrapper[4745]: E0121 10:49:21.754226 4745 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\": container with ID starting with 76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65 not found: ID does not exist" containerID="76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.754284 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65"} err="failed to get container status \"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\": rpc error: code = NotFound desc = could not find container \"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\": container with ID starting with 76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.754339 4745 scope.go:117] "RemoveContainer" containerID="8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51" Jan 21 10:49:21 crc kubenswrapper[4745]: E0121 10:49:21.755166 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\": container with ID starting with 8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51 not found: ID does not exist" containerID="8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.755213 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51"} err="failed to get container status \"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\": rpc error: code = NotFound desc = could not find container 
\"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\": container with ID starting with 8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.755240 4745 scope.go:117] "RemoveContainer" containerID="4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af" Jan 21 10:49:21 crc kubenswrapper[4745]: E0121 10:49:21.756213 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\": container with ID starting with 4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af not found: ID does not exist" containerID="4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.756251 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af"} err="failed to get container status \"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\": rpc error: code = NotFound desc = could not find container \"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\": container with ID starting with 4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.756277 4745 scope.go:117] "RemoveContainer" containerID="869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118" Jan 21 10:49:21 crc kubenswrapper[4745]: E0121 10:49:21.756671 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\": container with ID starting with 869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118 not found: ID does not exist" 
containerID="869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.756771 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118"} err="failed to get container status \"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\": rpc error: code = NotFound desc = could not find container \"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\": container with ID starting with 869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.756868 4745 scope.go:117] "RemoveContainer" containerID="36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab" Jan 21 10:49:21 crc kubenswrapper[4745]: E0121 10:49:21.757279 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\": container with ID starting with 36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab not found: ID does not exist" containerID="36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.757319 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab"} err="failed to get container status \"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\": rpc error: code = NotFound desc = could not find container \"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\": container with ID starting with 36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.757355 4745 scope.go:117] 
"RemoveContainer" containerID="a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.757700 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c"} err="failed to get container status \"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c\": rpc error: code = NotFound desc = could not find container \"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c\": container with ID starting with a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.757730 4745 scope.go:117] "RemoveContainer" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.758087 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25"} err="failed to get container status \"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\": rpc error: code = NotFound desc = could not find container \"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\": container with ID starting with e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.758115 4745 scope.go:117] "RemoveContainer" containerID="d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.758358 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2"} err="failed to get container status \"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\": rpc error: code = 
NotFound desc = could not find container \"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\": container with ID starting with d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.758379 4745 scope.go:117] "RemoveContainer" containerID="88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.758749 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6"} err="failed to get container status \"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\": rpc error: code = NotFound desc = could not find container \"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\": container with ID starting with 88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.758787 4745 scope.go:117] "RemoveContainer" containerID="11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.759039 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17"} err="failed to get container status \"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\": rpc error: code = NotFound desc = could not find container \"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\": container with ID starting with 11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.759062 4745 scope.go:117] "RemoveContainer" containerID="76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65" Jan 21 10:49:21 crc 
kubenswrapper[4745]: I0121 10:49:21.759280 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65"} err="failed to get container status \"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\": rpc error: code = NotFound desc = could not find container \"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\": container with ID starting with 76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.759303 4745 scope.go:117] "RemoveContainer" containerID="8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.759635 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51"} err="failed to get container status \"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\": rpc error: code = NotFound desc = could not find container \"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\": container with ID starting with 8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.759657 4745 scope.go:117] "RemoveContainer" containerID="4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.759903 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af"} err="failed to get container status \"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\": rpc error: code = NotFound desc = could not find container \"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\": container 
with ID starting with 4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.759978 4745 scope.go:117] "RemoveContainer" containerID="869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.760314 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118"} err="failed to get container status \"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\": rpc error: code = NotFound desc = could not find container \"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\": container with ID starting with 869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.760338 4745 scope.go:117] "RemoveContainer" containerID="36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.760649 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab"} err="failed to get container status \"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\": rpc error: code = NotFound desc = could not find container \"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\": container with ID starting with 36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.760730 4745 scope.go:117] "RemoveContainer" containerID="a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.761066 4745 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c"} err="failed to get container status \"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c\": rpc error: code = NotFound desc = could not find container \"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c\": container with ID starting with a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.761134 4745 scope.go:117] "RemoveContainer" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.761547 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25"} err="failed to get container status \"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\": rpc error: code = NotFound desc = could not find container \"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\": container with ID starting with e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.761614 4745 scope.go:117] "RemoveContainer" containerID="d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.761902 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2"} err="failed to get container status \"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\": rpc error: code = NotFound desc = could not find container \"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\": container with ID starting with d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2 not found: ID does not 
exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.761934 4745 scope.go:117] "RemoveContainer" containerID="88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.762280 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6"} err="failed to get container status \"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\": rpc error: code = NotFound desc = could not find container \"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\": container with ID starting with 88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.762311 4745 scope.go:117] "RemoveContainer" containerID="11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.762604 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17"} err="failed to get container status \"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\": rpc error: code = NotFound desc = could not find container \"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\": container with ID starting with 11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.762676 4745 scope.go:117] "RemoveContainer" containerID="76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.762979 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65"} err="failed to get container status 
\"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\": rpc error: code = NotFound desc = could not find container \"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\": container with ID starting with 76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.763007 4745 scope.go:117] "RemoveContainer" containerID="8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.763240 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51"} err="failed to get container status \"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\": rpc error: code = NotFound desc = could not find container \"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\": container with ID starting with 8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.763265 4745 scope.go:117] "RemoveContainer" containerID="4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.763567 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af"} err="failed to get container status \"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\": rpc error: code = NotFound desc = could not find container \"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\": container with ID starting with 4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.763595 4745 scope.go:117] "RemoveContainer" 
containerID="869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.764135 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118"} err="failed to get container status \"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\": rpc error: code = NotFound desc = could not find container \"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\": container with ID starting with 869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.764174 4745 scope.go:117] "RemoveContainer" containerID="36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.764457 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab"} err="failed to get container status \"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\": rpc error: code = NotFound desc = could not find container \"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\": container with ID starting with 36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.764479 4745 scope.go:117] "RemoveContainer" containerID="a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.764812 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c"} err="failed to get container status \"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c\": rpc error: code = NotFound desc = could 
not find container \"a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c\": container with ID starting with a768ebad7e84f63376a05564c06c6361fb9beb60afcc79fb4c897182cc4e803c not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.764845 4745 scope.go:117] "RemoveContainer" containerID="e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.765170 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25"} err="failed to get container status \"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\": rpc error: code = NotFound desc = could not find container \"e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25\": container with ID starting with e1eca3449e4692110c6d590f985b49bad6af1cb1bb15079d99e3796568db6c25 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.765245 4745 scope.go:117] "RemoveContainer" containerID="d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.765694 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2"} err="failed to get container status \"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\": rpc error: code = NotFound desc = could not find container \"d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2\": container with ID starting with d16904ca4bc928036d7fc62a9c8c6817b52d97199df2b30ce87a019e70cd28b2 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.765721 4745 scope.go:117] "RemoveContainer" containerID="88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 
10:49:21.765958 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6"} err="failed to get container status \"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\": rpc error: code = NotFound desc = could not find container \"88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6\": container with ID starting with 88a9deb2a0219001ef29e4e4e54e3082f768b2b22bab1de88023c65baf0973b6 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.765980 4745 scope.go:117] "RemoveContainer" containerID="11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.766197 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17"} err="failed to get container status \"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\": rpc error: code = NotFound desc = could not find container \"11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17\": container with ID starting with 11052c079eb6fd194069d7916ba47bfe89acdccd251072db27ffa59fd974ea17 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.766219 4745 scope.go:117] "RemoveContainer" containerID="76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.766623 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65"} err="failed to get container status \"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\": rpc error: code = NotFound desc = could not find container \"76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65\": container with ID starting with 
76f40d9854895e4b0b2d917283dffb3c1abfb03e11caef7ecbb3ba0ac960ed65 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.766725 4745 scope.go:117] "RemoveContainer" containerID="8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.767085 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51"} err="failed to get container status \"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\": rpc error: code = NotFound desc = could not find container \"8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51\": container with ID starting with 8b67ea606d2dc597aa3526822527d99d20a26e4c1cb9f5f4c26f938e9c7e7b51 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.767109 4745 scope.go:117] "RemoveContainer" containerID="4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.767309 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af"} err="failed to get container status \"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\": rpc error: code = NotFound desc = could not find container \"4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af\": container with ID starting with 4537b1b74c5e005bc539a7bf91c7cecfba33fca1064e783aeb27e3c99c3a68af not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.767337 4745 scope.go:117] "RemoveContainer" containerID="869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.767646 4745 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118"} err="failed to get container status \"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\": rpc error: code = NotFound desc = could not find container \"869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118\": container with ID starting with 869879b0c17f89ebb85b2ef5adf775886f151823c04f1c6aa9f8e2811c16d118 not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.767724 4745 scope.go:117] "RemoveContainer" containerID="36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.768054 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab"} err="failed to get container status \"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\": rpc error: code = NotFound desc = could not find container \"36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab\": container with ID starting with 36e14c10bba28f6c8fa4c9dd8910f40faeaba7db1802c8e8d476c83f94ef45ab not found: ID does not exist" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.812557 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rgtt2" event={"ID":"6f55bdba-45e5-485d-ae8f-a8576885b3ff","Type":"ContainerStarted","Data":"afc5f9cb833e144c556b284d913b8ef8d5cff4591809413f3ff0f1e211b232bf"} Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.823959 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/2.log" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.824709 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p8q45" 
event={"ID":"25458900-3da2-4c9d-8463-9acde2add0a6","Type":"ContainerStarted","Data":"244d6485576e8fd2c7ff135d11f403e2c06c9bcd959c2f9504b56536033ce3db"} Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.827338 4745 generic.go:334] "Generic (PLEG): container finished" podID="ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3" containerID="59f9e81f9302011d99a3cc82170ee7c6b45f7c29fd82ae75740daac4812c1e68" exitCode=0 Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.827436 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" event={"ID":"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3","Type":"ContainerDied","Data":"59f9e81f9302011d99a3cc82170ee7c6b45f7c29fd82ae75740daac4812c1e68"} Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.831490 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-7xg5s" event={"ID":"28ac8429-55e4-4387-99d2-f20e654f0dde","Type":"ContainerStarted","Data":"5ce406742d7d9b7a966944c30b828c54ce5a4d59240781a3a7bf4ccf035420a9"} Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.832571 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-7xg5s" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.846059 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-rgtt2" podStartSLOduration=2.066517016 podStartE2EDuration="12.846023662s" podCreationTimestamp="2026-01-21 10:49:09 +0000 UTC" firstStartedPulling="2026-01-21 10:49:09.930116849 +0000 UTC m=+734.390904447" lastFinishedPulling="2026-01-21 10:49:20.709623495 +0000 UTC m=+745.170411093" observedRunningTime="2026-01-21 10:49:21.838117179 +0000 UTC m=+746.298904777" watchObservedRunningTime="2026-01-21 10:49:21.846023662 +0000 UTC m=+746.306811260" Jan 21 10:49:21 crc kubenswrapper[4745]: I0121 10:49:21.910519 4745 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-7xg5s" podStartSLOduration=2.277915114 podStartE2EDuration="12.910486655s" podCreationTimestamp="2026-01-21 10:49:09 +0000 UTC" firstStartedPulling="2026-01-21 10:49:10.009090961 +0000 UTC m=+734.469878569" lastFinishedPulling="2026-01-21 10:49:20.641662512 +0000 UTC m=+745.102450110" observedRunningTime="2026-01-21 10:49:21.902774719 +0000 UTC m=+746.363562317" watchObservedRunningTime="2026-01-21 10:49:21.910486655 +0000 UTC m=+746.371274253" Jan 21 10:49:22 crc kubenswrapper[4745]: I0121 10:49:22.846755 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" event={"ID":"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3","Type":"ContainerStarted","Data":"5f76e96bdd0ed2138d727818056bda2fcd4f05ed23a405b285ee840c5f289521"} Jan 21 10:49:22 crc kubenswrapper[4745]: I0121 10:49:22.848141 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" event={"ID":"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3","Type":"ContainerStarted","Data":"4929c9bd6724765a583b1023590de7a80bc2daf69a3676cb542985d4fed1c154"} Jan 21 10:49:22 crc kubenswrapper[4745]: I0121 10:49:22.848162 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" event={"ID":"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3","Type":"ContainerStarted","Data":"ec51682c18318f17332815f38bda165d14f1d3945f1d314d9e64c8bdf48f2811"} Jan 21 10:49:22 crc kubenswrapper[4745]: I0121 10:49:22.848174 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" event={"ID":"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3","Type":"ContainerStarted","Data":"dfdebe1aa1a62bb2b8952eda23d3e86f61bbf3fcf9bb163a423bf03a8286d4ac"} Jan 21 10:49:22 crc kubenswrapper[4745]: I0121 10:49:22.852047 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-s5t4j" 
event={"ID":"60b550eb-7b13-4042-99c2-70f21e9ec81f","Type":"ContainerStarted","Data":"601828d18867b5552adbdb890933227b36c84ce957475c57e525fd08022aedb0"} Jan 21 10:49:22 crc kubenswrapper[4745]: I0121 10:49:22.878558 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-s5t4j" podStartSLOduration=2.166917891 podStartE2EDuration="13.878509837s" podCreationTimestamp="2026-01-21 10:49:09 +0000 UTC" firstStartedPulling="2026-01-21 10:49:10.21561083 +0000 UTC m=+734.676398438" lastFinishedPulling="2026-01-21 10:49:21.927202786 +0000 UTC m=+746.387990384" observedRunningTime="2026-01-21 10:49:22.870658756 +0000 UTC m=+747.331446364" watchObservedRunningTime="2026-01-21 10:49:22.878509837 +0000 UTC m=+747.339297435" Jan 21 10:49:23 crc kubenswrapper[4745]: I0121 10:49:23.861171 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" event={"ID":"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3","Type":"ContainerStarted","Data":"cb79b4080bb8ada00976aab8463f27271ae00aa5ffb3fe019e90c3f345cb3e3b"} Jan 21 10:49:23 crc kubenswrapper[4745]: I0121 10:49:23.861243 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" event={"ID":"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3","Type":"ContainerStarted","Data":"28b31f56625a8e3e2aa5763a3bbc6431ba0de4c1eab18bdf286dc2a21049d54b"} Jan 21 10:49:26 crc kubenswrapper[4745]: I0121 10:49:26.892931 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" event={"ID":"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3","Type":"ContainerStarted","Data":"3af51558926bfa5c080db7081431619efa5cb98bfc459d5192749c28270c5532"} Jan 21 10:49:29 crc kubenswrapper[4745]: I0121 10:49:29.704662 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-7xg5s" Jan 21 10:49:29 crc kubenswrapper[4745]: I0121 10:49:29.921049 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" event={"ID":"ea4ab6dd-fe5f-4741-b606-15aa8b3b50c3","Type":"ContainerStarted","Data":"b6f9ae84d80e2cdbc2c382f0baf0272f62fe558dcb8417c497423f67091dffec"} Jan 21 10:49:29 crc kubenswrapper[4745]: I0121 10:49:29.921680 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:29 crc kubenswrapper[4745]: I0121 10:49:29.954519 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:29 crc kubenswrapper[4745]: I0121 10:49:29.958509 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" podStartSLOduration=10.958474922 podStartE2EDuration="10.958474922s" podCreationTimestamp="2026-01-21 10:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:49:29.954292814 +0000 UTC m=+754.415080422" watchObservedRunningTime="2026-01-21 10:49:29.958474922 +0000 UTC m=+754.419262540" Jan 21 10:49:30 crc kubenswrapper[4745]: I0121 10:49:30.927996 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:30 crc kubenswrapper[4745]: I0121 10:49:30.928100 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:30 crc kubenswrapper[4745]: I0121 10:49:30.969361 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:49:38 crc kubenswrapper[4745]: I0121 10:49:38.009949 4745 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 21 10:49:49 crc kubenswrapper[4745]: I0121 
10:49:49.761809 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-sph9g" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.028825 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk"] Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.030739 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.033505 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.041512 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk"] Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.125321 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clp5j\" (UniqueName: \"kubernetes.io/projected/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-kube-api-access-clp5j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.125421 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 
10:50:13.125470 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.226813 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clp5j\" (UniqueName: \"kubernetes.io/projected/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-kube-api-access-clp5j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.226876 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.226904 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.227391 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.227570 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.246101 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clp5j\" (UniqueName: \"kubernetes.io/projected/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-kube-api-access-clp5j\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.389872 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:13 crc kubenswrapper[4745]: I0121 10:50:13.790503 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk"] Jan 21 10:50:14 crc kubenswrapper[4745]: I0121 10:50:14.195543 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" event={"ID":"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e","Type":"ContainerStarted","Data":"7febb09245b7f04c87338e87628a064651c1f3004c7fd89163edbf6637d8566c"} Jan 21 10:50:14 crc kubenswrapper[4745]: I0121 10:50:14.196762 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" event={"ID":"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e","Type":"ContainerStarted","Data":"9610e3feec973abde605ba131889cdb1b03066953f5a8546449bd8759012b7f9"} Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.206987 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" event={"ID":"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e","Type":"ContainerDied","Data":"7febb09245b7f04c87338e87628a064651c1f3004c7fd89163edbf6637d8566c"} Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.206913 4745 generic.go:334] "Generic (PLEG): container finished" podID="ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" containerID="7febb09245b7f04c87338e87628a064651c1f3004c7fd89163edbf6637d8566c" exitCode=0 Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.353453 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v8dqv"] Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.354801 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.378412 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8dqv"] Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.458079 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-utilities\") pod \"redhat-operators-v8dqv\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.458158 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-catalog-content\") pod \"redhat-operators-v8dqv\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.458199 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spw6l\" (UniqueName: \"kubernetes.io/projected/77188672-dc9b-4158-a11d-d72ebfb12310-kube-api-access-spw6l\") pod \"redhat-operators-v8dqv\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.559785 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-catalog-content\") pod \"redhat-operators-v8dqv\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.559901 4745 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-spw6l\" (UniqueName: \"kubernetes.io/projected/77188672-dc9b-4158-a11d-d72ebfb12310-kube-api-access-spw6l\") pod \"redhat-operators-v8dqv\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.560002 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-utilities\") pod \"redhat-operators-v8dqv\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.560590 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-utilities\") pod \"redhat-operators-v8dqv\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.560963 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-catalog-content\") pod \"redhat-operators-v8dqv\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.586475 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spw6l\" (UniqueName: \"kubernetes.io/projected/77188672-dc9b-4158-a11d-d72ebfb12310-kube-api-access-spw6l\") pod \"redhat-operators-v8dqv\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.671929 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.866722 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:50:15 crc kubenswrapper[4745]: I0121 10:50:15.867290 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:50:16 crc kubenswrapper[4745]: I0121 10:50:16.201128 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8dqv"] Jan 21 10:50:16 crc kubenswrapper[4745]: W0121 10:50:16.206830 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77188672_dc9b_4158_a11d_d72ebfb12310.slice/crio-0ebd9dfacbf398d3db1daf928a569fb0418322d18c4df959cc2e172ced3028aa WatchSource:0}: Error finding container 0ebd9dfacbf398d3db1daf928a569fb0418322d18c4df959cc2e172ced3028aa: Status 404 returned error can't find the container with id 0ebd9dfacbf398d3db1daf928a569fb0418322d18c4df959cc2e172ced3028aa Jan 21 10:50:16 crc kubenswrapper[4745]: I0121 10:50:16.218139 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8dqv" event={"ID":"77188672-dc9b-4158-a11d-d72ebfb12310","Type":"ContainerStarted","Data":"0ebd9dfacbf398d3db1daf928a569fb0418322d18c4df959cc2e172ced3028aa"} Jan 21 10:50:17 crc kubenswrapper[4745]: I0121 10:50:17.230126 4745 generic.go:334] "Generic (PLEG): container finished" 
podID="77188672-dc9b-4158-a11d-d72ebfb12310" containerID="4aa404a100dc6986a6fdb6d764bdbf3fbd0e46c853dcdfc15f37d8294a5781fe" exitCode=0 Jan 21 10:50:17 crc kubenswrapper[4745]: I0121 10:50:17.230256 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8dqv" event={"ID":"77188672-dc9b-4158-a11d-d72ebfb12310","Type":"ContainerDied","Data":"4aa404a100dc6986a6fdb6d764bdbf3fbd0e46c853dcdfc15f37d8294a5781fe"} Jan 21 10:50:19 crc kubenswrapper[4745]: I0121 10:50:19.250245 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" event={"ID":"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e","Type":"ContainerStarted","Data":"9b9c08cc491004b3ac584f1c495c3a552d75535cb2c638d0fe4ad7054593700a"} Jan 21 10:50:20 crc kubenswrapper[4745]: I0121 10:50:20.260088 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8dqv" event={"ID":"77188672-dc9b-4158-a11d-d72ebfb12310","Type":"ContainerStarted","Data":"0d9e3e16ddeb16cc69582e1b79875c06f4e82d95ba892919a1e8055306f87e99"} Jan 21 10:50:20 crc kubenswrapper[4745]: I0121 10:50:20.262611 4745 generic.go:334] "Generic (PLEG): container finished" podID="ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" containerID="9b9c08cc491004b3ac584f1c495c3a552d75535cb2c638d0fe4ad7054593700a" exitCode=0 Jan 21 10:50:20 crc kubenswrapper[4745]: I0121 10:50:20.262661 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" event={"ID":"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e","Type":"ContainerDied","Data":"9b9c08cc491004b3ac584f1c495c3a552d75535cb2c638d0fe4ad7054593700a"} Jan 21 10:50:21 crc kubenswrapper[4745]: I0121 10:50:21.275455 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" 
event={"ID":"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e","Type":"ContainerStarted","Data":"fdb33d75687ded1a7af260549dc87cadd235fe0e28e29c2179d234bf9fb337dd"} Jan 21 10:50:21 crc kubenswrapper[4745]: I0121 10:50:21.323564 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" podStartSLOduration=7.044472588 podStartE2EDuration="8.323517818s" podCreationTimestamp="2026-01-21 10:50:13 +0000 UTC" firstStartedPulling="2026-01-21 10:50:15.211116572 +0000 UTC m=+799.671904170" lastFinishedPulling="2026-01-21 10:50:16.490161802 +0000 UTC m=+800.950949400" observedRunningTime="2026-01-21 10:50:21.323321023 +0000 UTC m=+805.784108621" watchObservedRunningTime="2026-01-21 10:50:21.323517818 +0000 UTC m=+805.784305416" Jan 21 10:50:22 crc kubenswrapper[4745]: I0121 10:50:22.285634 4745 generic.go:334] "Generic (PLEG): container finished" podID="ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" containerID="fdb33d75687ded1a7af260549dc87cadd235fe0e28e29c2179d234bf9fb337dd" exitCode=0 Jan 21 10:50:22 crc kubenswrapper[4745]: I0121 10:50:22.285790 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" event={"ID":"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e","Type":"ContainerDied","Data":"fdb33d75687ded1a7af260549dc87cadd235fe0e28e29c2179d234bf9fb337dd"} Jan 21 10:50:23 crc kubenswrapper[4745]: I0121 10:50:23.711373 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:23 crc kubenswrapper[4745]: I0121 10:50:23.883988 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-util\") pod \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " Jan 21 10:50:23 crc kubenswrapper[4745]: I0121 10:50:23.884513 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clp5j\" (UniqueName: \"kubernetes.io/projected/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-kube-api-access-clp5j\") pod \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " Jan 21 10:50:23 crc kubenswrapper[4745]: I0121 10:50:23.884659 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-bundle\") pod \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\" (UID: \"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e\") " Jan 21 10:50:23 crc kubenswrapper[4745]: I0121 10:50:23.885814 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-bundle" (OuterVolumeSpecName: "bundle") pod "ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" (UID: "ad178c32-02aa-40e7-aa77-35e3c5b9bd0e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:50:23 crc kubenswrapper[4745]: I0121 10:50:23.893734 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-kube-api-access-clp5j" (OuterVolumeSpecName: "kube-api-access-clp5j") pod "ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" (UID: "ad178c32-02aa-40e7-aa77-35e3c5b9bd0e"). InnerVolumeSpecName "kube-api-access-clp5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:50:23 crc kubenswrapper[4745]: I0121 10:50:23.895230 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-util" (OuterVolumeSpecName: "util") pod "ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" (UID: "ad178c32-02aa-40e7-aa77-35e3c5b9bd0e"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:50:23 crc kubenswrapper[4745]: I0121 10:50:23.986297 4745 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-util\") on node \"crc\" DevicePath \"\"" Jan 21 10:50:23 crc kubenswrapper[4745]: I0121 10:50:23.986348 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clp5j\" (UniqueName: \"kubernetes.io/projected/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-kube-api-access-clp5j\") on node \"crc\" DevicePath \"\"" Jan 21 10:50:23 crc kubenswrapper[4745]: I0121 10:50:23.986370 4745 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ad178c32-02aa-40e7-aa77-35e3c5b9bd0e-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:50:24 crc kubenswrapper[4745]: I0121 10:50:24.302133 4745 generic.go:334] "Generic (PLEG): container finished" podID="77188672-dc9b-4158-a11d-d72ebfb12310" containerID="0d9e3e16ddeb16cc69582e1b79875c06f4e82d95ba892919a1e8055306f87e99" exitCode=0 Jan 21 10:50:24 crc kubenswrapper[4745]: I0121 10:50:24.302250 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8dqv" event={"ID":"77188672-dc9b-4158-a11d-d72ebfb12310","Type":"ContainerDied","Data":"0d9e3e16ddeb16cc69582e1b79875c06f4e82d95ba892919a1e8055306f87e99"} Jan 21 10:50:24 crc kubenswrapper[4745]: I0121 10:50:24.310075 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" event={"ID":"ad178c32-02aa-40e7-aa77-35e3c5b9bd0e","Type":"ContainerDied","Data":"9610e3feec973abde605ba131889cdb1b03066953f5a8546449bd8759012b7f9"} Jan 21 10:50:24 crc kubenswrapper[4745]: I0121 10:50:24.310117 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9610e3feec973abde605ba131889cdb1b03066953f5a8546449bd8759012b7f9" Jan 21 10:50:24 crc kubenswrapper[4745]: I0121 10:50:24.310177 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk" Jan 21 10:50:25 crc kubenswrapper[4745]: I0121 10:50:25.319829 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8dqv" event={"ID":"77188672-dc9b-4158-a11d-d72ebfb12310","Type":"ContainerStarted","Data":"998990423e8ec7f377f93be3f5d95bdb7b9cbdd1515ee5f19438b8cf22d969d5"} Jan 21 10:50:25 crc kubenswrapper[4745]: I0121 10:50:25.348105 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v8dqv" podStartSLOduration=2.892163741 podStartE2EDuration="10.348065571s" podCreationTimestamp="2026-01-21 10:50:15 +0000 UTC" firstStartedPulling="2026-01-21 10:50:17.232439823 +0000 UTC m=+801.693227421" lastFinishedPulling="2026-01-21 10:50:24.688341653 +0000 UTC m=+809.149129251" observedRunningTime="2026-01-21 10:50:25.340791047 +0000 UTC m=+809.801578665" watchObservedRunningTime="2026-01-21 10:50:25.348065571 +0000 UTC m=+809.808853169" Jan 21 10:50:25 crc kubenswrapper[4745]: I0121 10:50:25.672251 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:25 crc kubenswrapper[4745]: I0121 10:50:25.672320 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:26 crc kubenswrapper[4745]: I0121 10:50:26.716415 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v8dqv" podUID="77188672-dc9b-4158-a11d-d72ebfb12310" containerName="registry-server" probeResult="failure" output=< Jan 21 10:50:26 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 10:50:26 crc kubenswrapper[4745]: > Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.097626 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-27x5h"] Jan 21 10:50:30 crc kubenswrapper[4745]: E0121 10:50:30.098398 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" containerName="pull" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.098414 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" containerName="pull" Jan 21 10:50:30 crc kubenswrapper[4745]: E0121 10:50:30.098436 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" containerName="extract" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.098442 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" containerName="extract" Jan 21 10:50:30 crc kubenswrapper[4745]: E0121 10:50:30.098455 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" containerName="util" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.098461 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" containerName="util" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.098583 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad178c32-02aa-40e7-aa77-35e3c5b9bd0e" containerName="extract" Jan 21 10:50:30 crc 
kubenswrapper[4745]: I0121 10:50:30.099077 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-27x5h" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.102135 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rd8sf" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.104088 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.104325 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.119564 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-27x5h"] Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.195055 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqsvf\" (UniqueName: \"kubernetes.io/projected/26a2f875-6a73-4039-b234-7f628c77bdda-kube-api-access-xqsvf\") pod \"nmstate-operator-646758c888-27x5h\" (UID: \"26a2f875-6a73-4039-b234-7f628c77bdda\") " pod="openshift-nmstate/nmstate-operator-646758c888-27x5h" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.296342 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqsvf\" (UniqueName: \"kubernetes.io/projected/26a2f875-6a73-4039-b234-7f628c77bdda-kube-api-access-xqsvf\") pod \"nmstate-operator-646758c888-27x5h\" (UID: \"26a2f875-6a73-4039-b234-7f628c77bdda\") " pod="openshift-nmstate/nmstate-operator-646758c888-27x5h" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.319306 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqsvf\" (UniqueName: 
\"kubernetes.io/projected/26a2f875-6a73-4039-b234-7f628c77bdda-kube-api-access-xqsvf\") pod \"nmstate-operator-646758c888-27x5h\" (UID: \"26a2f875-6a73-4039-b234-7f628c77bdda\") " pod="openshift-nmstate/nmstate-operator-646758c888-27x5h" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.415086 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-27x5h" Jan 21 10:50:30 crc kubenswrapper[4745]: I0121 10:50:30.728465 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-27x5h"] Jan 21 10:50:31 crc kubenswrapper[4745]: I0121 10:50:31.357607 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-27x5h" event={"ID":"26a2f875-6a73-4039-b234-7f628c77bdda","Type":"ContainerStarted","Data":"06d4a13b66eebbaca424c8434a8ec57015419d3787e3bd97d0a9e44bcdaa9ecc"} Jan 21 10:50:34 crc kubenswrapper[4745]: I0121 10:50:34.386313 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-27x5h" event={"ID":"26a2f875-6a73-4039-b234-7f628c77bdda","Type":"ContainerStarted","Data":"a1ed3ad725b1deea61f23b063d19fd6c55acf26249393851af0e94cfbce04fab"} Jan 21 10:50:34 crc kubenswrapper[4745]: I0121 10:50:34.414365 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-27x5h" podStartSLOduration=1.457792773 podStartE2EDuration="4.414326052s" podCreationTimestamp="2026-01-21 10:50:30 +0000 UTC" firstStartedPulling="2026-01-21 10:50:30.755298331 +0000 UTC m=+815.216085929" lastFinishedPulling="2026-01-21 10:50:33.71183161 +0000 UTC m=+818.172619208" observedRunningTime="2026-01-21 10:50:34.409944889 +0000 UTC m=+818.870732497" watchObservedRunningTime="2026-01-21 10:50:34.414326052 +0000 UTC m=+818.875113650" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.459489 4745 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9t5nq"] Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.460632 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-9t5nq" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.465867 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-9fkr9" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.478624 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch"] Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.479471 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.492333 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.493311 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/89a613eb-ec6f-48dc-97d8-38e59281d04e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k4fch\" (UID: \"89a613eb-ec6f-48dc-97d8-38e59281d04e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.493375 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsxhf\" (UniqueName: \"kubernetes.io/projected/02756c63-b6cc-42ef-ba04-fbd6127ccfa7-kube-api-access-wsxhf\") pod \"nmstate-metrics-54757c584b-9t5nq\" (UID: \"02756c63-b6cc-42ef-ba04-fbd6127ccfa7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9t5nq" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.493407 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v626q\" (UniqueName: \"kubernetes.io/projected/89a613eb-ec6f-48dc-97d8-38e59281d04e-kube-api-access-v626q\") pod \"nmstate-webhook-8474b5b9d8-k4fch\" (UID: \"89a613eb-ec6f-48dc-97d8-38e59281d04e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.514705 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-bpmz2"] Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.515515 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.518214 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9t5nq"] Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.526127 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch"] Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.594044 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/89a613eb-ec6f-48dc-97d8-38e59281d04e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k4fch\" (UID: \"89a613eb-ec6f-48dc-97d8-38e59281d04e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.594093 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/976354ad-a346-409e-893a-d8edb62a6148-nmstate-lock\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.594124 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-wsxhf\" (UniqueName: \"kubernetes.io/projected/02756c63-b6cc-42ef-ba04-fbd6127ccfa7-kube-api-access-wsxhf\") pod \"nmstate-metrics-54757c584b-9t5nq\" (UID: \"02756c63-b6cc-42ef-ba04-fbd6127ccfa7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9t5nq" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.594149 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v626q\" (UniqueName: \"kubernetes.io/projected/89a613eb-ec6f-48dc-97d8-38e59281d04e-kube-api-access-v626q\") pod \"nmstate-webhook-8474b5b9d8-k4fch\" (UID: \"89a613eb-ec6f-48dc-97d8-38e59281d04e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.594192 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5vpx\" (UniqueName: \"kubernetes.io/projected/976354ad-a346-409e-893a-d8edb62a6148-kube-api-access-r5vpx\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.594223 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/976354ad-a346-409e-893a-d8edb62a6148-dbus-socket\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.594253 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/976354ad-a346-409e-893a-d8edb62a6148-ovs-socket\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: E0121 10:50:35.594336 4745 secret.go:188] Couldn't get 
secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 21 10:50:35 crc kubenswrapper[4745]: E0121 10:50:35.594383 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/89a613eb-ec6f-48dc-97d8-38e59281d04e-tls-key-pair podName:89a613eb-ec6f-48dc-97d8-38e59281d04e nodeName:}" failed. No retries permitted until 2026-01-21 10:50:36.094364337 +0000 UTC m=+820.555151935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/89a613eb-ec6f-48dc-97d8-38e59281d04e-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-k4fch" (UID: "89a613eb-ec6f-48dc-97d8-38e59281d04e") : secret "openshift-nmstate-webhook" not found Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.628750 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v626q\" (UniqueName: \"kubernetes.io/projected/89a613eb-ec6f-48dc-97d8-38e59281d04e-kube-api-access-v626q\") pod \"nmstate-webhook-8474b5b9d8-k4fch\" (UID: \"89a613eb-ec6f-48dc-97d8-38e59281d04e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.638684 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsxhf\" (UniqueName: \"kubernetes.io/projected/02756c63-b6cc-42ef-ba04-fbd6127ccfa7-kube-api-access-wsxhf\") pod \"nmstate-metrics-54757c584b-9t5nq\" (UID: \"02756c63-b6cc-42ef-ba04-fbd6127ccfa7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-9t5nq" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.695752 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5vpx\" (UniqueName: \"kubernetes.io/projected/976354ad-a346-409e-893a-d8edb62a6148-kube-api-access-r5vpx\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc 
kubenswrapper[4745]: I0121 10:50:35.695813 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/976354ad-a346-409e-893a-d8edb62a6148-dbus-socket\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.695871 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/976354ad-a346-409e-893a-d8edb62a6148-ovs-socket\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.696126 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/976354ad-a346-409e-893a-d8edb62a6148-ovs-socket\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.696260 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/976354ad-a346-409e-893a-d8edb62a6148-dbus-socket\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.696405 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/976354ad-a346-409e-893a-d8edb62a6148-nmstate-lock\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.696476 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" 
(UniqueName: \"kubernetes.io/host-path/976354ad-a346-409e-893a-d8edb62a6148-nmstate-lock\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.713836 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72"] Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.714482 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.724313 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.724417 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.724728 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2b6q2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.729398 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5vpx\" (UniqueName: \"kubernetes.io/projected/976354ad-a346-409e-893a-d8edb62a6148-kube-api-access-r5vpx\") pod \"nmstate-handler-bpmz2\" (UID: \"976354ad-a346-409e-893a-d8edb62a6148\") " pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.756454 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.774466 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-9t5nq" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.781030 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72"] Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.832245 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.841968 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.900846 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f632930-37d6-4083-80d2-e56d394f5289-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-54v72\" (UID: \"5f632930-37d6-4083-80d2-e56d394f5289\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.900909 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5f632930-37d6-4083-80d2-e56d394f5289-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-54v72\" (UID: \"5f632930-37d6-4083-80d2-e56d394f5289\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.900961 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwxzt\" (UniqueName: \"kubernetes.io/projected/5f632930-37d6-4083-80d2-e56d394f5289-kube-api-access-pwxzt\") pod \"nmstate-console-plugin-7754f76f8b-54v72\" (UID: \"5f632930-37d6-4083-80d2-e56d394f5289\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" Jan 21 10:50:35 crc 
kubenswrapper[4745]: I0121 10:50:35.936811 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-675b995478-g46rl"] Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.937479 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:35 crc kubenswrapper[4745]: I0121 10:50:35.965494 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-675b995478-g46rl"] Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.002563 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f632930-37d6-4083-80d2-e56d394f5289-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-54v72\" (UID: \"5f632930-37d6-4083-80d2-e56d394f5289\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.002629 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5f632930-37d6-4083-80d2-e56d394f5289-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-54v72\" (UID: \"5f632930-37d6-4083-80d2-e56d394f5289\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.002661 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwxzt\" (UniqueName: \"kubernetes.io/projected/5f632930-37d6-4083-80d2-e56d394f5289-kube-api-access-pwxzt\") pod \"nmstate-console-plugin-7754f76f8b-54v72\" (UID: \"5f632930-37d6-4083-80d2-e56d394f5289\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.018338 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5f632930-37d6-4083-80d2-e56d394f5289-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-54v72\" (UID: \"5f632930-37d6-4083-80d2-e56d394f5289\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.030036 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/5f632930-37d6-4083-80d2-e56d394f5289-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-54v72\" (UID: \"5f632930-37d6-4083-80d2-e56d394f5289\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.061212 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwxzt\" (UniqueName: \"kubernetes.io/projected/5f632930-37d6-4083-80d2-e56d394f5289-kube-api-access-pwxzt\") pod \"nmstate-console-plugin-7754f76f8b-54v72\" (UID: \"5f632930-37d6-4083-80d2-e56d394f5289\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.070281 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.104386 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hm8m4\" (UniqueName: \"kubernetes.io/projected/f2204ce6-1eef-4937-91b4-eb137c6e7077-kube-api-access-hm8m4\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.104572 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f2204ce6-1eef-4937-91b4-eb137c6e7077-console-oauth-config\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.104678 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-trusted-ca-bundle\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.104766 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f2204ce6-1eef-4937-91b4-eb137c6e7077-console-serving-cert\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.104860 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-service-ca\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.104997 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/89a613eb-ec6f-48dc-97d8-38e59281d04e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k4fch\" (UID: \"89a613eb-ec6f-48dc-97d8-38e59281d04e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.105130 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-oauth-serving-cert\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.105459 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-console-config\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.111285 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/89a613eb-ec6f-48dc-97d8-38e59281d04e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-k4fch\" (UID: \"89a613eb-ec6f-48dc-97d8-38e59281d04e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.207282 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-console-config\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.207329 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hm8m4\" (UniqueName: \"kubernetes.io/projected/f2204ce6-1eef-4937-91b4-eb137c6e7077-kube-api-access-hm8m4\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.207354 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f2204ce6-1eef-4937-91b4-eb137c6e7077-console-oauth-config\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.207378 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-trusted-ca-bundle\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.207398 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f2204ce6-1eef-4937-91b4-eb137c6e7077-console-serving-cert\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.207413 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-service-ca\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.207435 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-oauth-serving-cert\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.209862 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-oauth-serving-cert\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.210025 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-console-config\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.210355 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-service-ca\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.210967 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f2204ce6-1eef-4937-91b4-eb137c6e7077-trusted-ca-bundle\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.212606 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f2204ce6-1eef-4937-91b4-eb137c6e7077-console-oauth-config\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.213900 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f2204ce6-1eef-4937-91b4-eb137c6e7077-console-serving-cert\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.234583 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hm8m4\" (UniqueName: \"kubernetes.io/projected/f2204ce6-1eef-4937-91b4-eb137c6e7077-kube-api-access-hm8m4\") pod \"console-675b995478-g46rl\" (UID: \"f2204ce6-1eef-4937-91b4-eb137c6e7077\") " pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.262122 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.348730 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72"] Jan 21 10:50:36 crc kubenswrapper[4745]: W0121 10:50:36.360736 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f632930_37d6_4083_80d2_e56d394f5289.slice/crio-1a77b4e37775e818db8d2eae4a2c87434c73c0102707aa248db0c1413efd2e47 WatchSource:0}: Error finding container 1a77b4e37775e818db8d2eae4a2c87434c73c0102707aa248db0c1413efd2e47: Status 404 returned error can't find the container with id 1a77b4e37775e818db8d2eae4a2c87434c73c0102707aa248db0c1413efd2e47 Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.380301 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-9t5nq"] Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.401995 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.413299 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-9t5nq" event={"ID":"02756c63-b6cc-42ef-ba04-fbd6127ccfa7","Type":"ContainerStarted","Data":"b5e9c6695bba3957f822a014ba765f53f9444f88d0fbee520dcb5aee9f0af891"} Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.416030 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" event={"ID":"5f632930-37d6-4083-80d2-e56d394f5289","Type":"ContainerStarted","Data":"1a77b4e37775e818db8d2eae4a2c87434c73c0102707aa248db0c1413efd2e47"} Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.417137 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bpmz2" event={"ID":"976354ad-a346-409e-893a-d8edb62a6148","Type":"ContainerStarted","Data":"2072287ddd4437e890abcc6907966a8cf6ed2f730facbcf9f87ef451d497ccae"} Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.573724 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-675b995478-g46rl"] Jan 21 10:50:36 crc kubenswrapper[4745]: I0121 10:50:36.689407 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch"] Jan 21 10:50:37 crc kubenswrapper[4745]: I0121 10:50:37.425669 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-675b995478-g46rl" event={"ID":"f2204ce6-1eef-4937-91b4-eb137c6e7077","Type":"ContainerStarted","Data":"7639004c3867aa831e7e83a3d565e21a1287314f26946aa0b34dd64fb061c30d"} Jan 21 10:50:37 crc kubenswrapper[4745]: I0121 10:50:37.426838 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-675b995478-g46rl" 
event={"ID":"f2204ce6-1eef-4937-91b4-eb137c6e7077","Type":"ContainerStarted","Data":"e856ad499491a31e9962682cfbe85bb3ace703dae4139beaf6b68fada9f2fc30"} Jan 21 10:50:37 crc kubenswrapper[4745]: I0121 10:50:37.431061 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" event={"ID":"89a613eb-ec6f-48dc-97d8-38e59281d04e","Type":"ContainerStarted","Data":"058fb4ea9ed1d5ead3bae75aa4a74bee105620b378effc4045bf45ab011ee336"} Jan 21 10:50:37 crc kubenswrapper[4745]: I0121 10:50:37.454642 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-675b995478-g46rl" podStartSLOduration=2.454613007 podStartE2EDuration="2.454613007s" podCreationTimestamp="2026-01-21 10:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:50:37.451192811 +0000 UTC m=+821.911980409" watchObservedRunningTime="2026-01-21 10:50:37.454613007 +0000 UTC m=+821.915400725" Jan 21 10:50:37 crc kubenswrapper[4745]: I0121 10:50:37.828376 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8dqv"] Jan 21 10:50:37 crc kubenswrapper[4745]: I0121 10:50:37.828749 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v8dqv" podUID="77188672-dc9b-4158-a11d-d72ebfb12310" containerName="registry-server" containerID="cri-o://998990423e8ec7f377f93be3f5d95bdb7b9cbdd1515ee5f19438b8cf22d969d5" gracePeriod=2 Jan 21 10:50:38 crc kubenswrapper[4745]: I0121 10:50:38.492899 4745 generic.go:334] "Generic (PLEG): container finished" podID="77188672-dc9b-4158-a11d-d72ebfb12310" containerID="998990423e8ec7f377f93be3f5d95bdb7b9cbdd1515ee5f19438b8cf22d969d5" exitCode=0 Jan 21 10:50:38 crc kubenswrapper[4745]: I0121 10:50:38.493826 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-v8dqv" event={"ID":"77188672-dc9b-4158-a11d-d72ebfb12310","Type":"ContainerDied","Data":"998990423e8ec7f377f93be3f5d95bdb7b9cbdd1515ee5f19438b8cf22d969d5"} Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.187080 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.321484 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-utilities\") pod \"77188672-dc9b-4158-a11d-d72ebfb12310\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.321660 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-spw6l\" (UniqueName: \"kubernetes.io/projected/77188672-dc9b-4158-a11d-d72ebfb12310-kube-api-access-spw6l\") pod \"77188672-dc9b-4158-a11d-d72ebfb12310\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.321874 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-catalog-content\") pod \"77188672-dc9b-4158-a11d-d72ebfb12310\" (UID: \"77188672-dc9b-4158-a11d-d72ebfb12310\") " Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.323548 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-utilities" (OuterVolumeSpecName: "utilities") pod "77188672-dc9b-4158-a11d-d72ebfb12310" (UID: "77188672-dc9b-4158-a11d-d72ebfb12310"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.324797 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.328548 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77188672-dc9b-4158-a11d-d72ebfb12310-kube-api-access-spw6l" (OuterVolumeSpecName: "kube-api-access-spw6l") pod "77188672-dc9b-4158-a11d-d72ebfb12310" (UID: "77188672-dc9b-4158-a11d-d72ebfb12310"). InnerVolumeSpecName "kube-api-access-spw6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.425731 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-spw6l\" (UniqueName: \"kubernetes.io/projected/77188672-dc9b-4158-a11d-d72ebfb12310-kube-api-access-spw6l\") on node \"crc\" DevicePath \"\"" Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.446474 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77188672-dc9b-4158-a11d-d72ebfb12310" (UID: "77188672-dc9b-4158-a11d-d72ebfb12310"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.507149 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8dqv" event={"ID":"77188672-dc9b-4158-a11d-d72ebfb12310","Type":"ContainerDied","Data":"0ebd9dfacbf398d3db1daf928a569fb0418322d18c4df959cc2e172ced3028aa"} Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.507228 4745 scope.go:117] "RemoveContainer" containerID="998990423e8ec7f377f93be3f5d95bdb7b9cbdd1515ee5f19438b8cf22d969d5" Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.507397 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8dqv" Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.527461 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77188672-dc9b-4158-a11d-d72ebfb12310-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.529740 4745 scope.go:117] "RemoveContainer" containerID="0d9e3e16ddeb16cc69582e1b79875c06f4e82d95ba892919a1e8055306f87e99" Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.539841 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8dqv"] Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.543015 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v8dqv"] Jan 21 10:50:40 crc kubenswrapper[4745]: I0121 10:50:40.571046 4745 scope.go:117] "RemoveContainer" containerID="4aa404a100dc6986a6fdb6d764bdbf3fbd0e46c853dcdfc15f37d8294a5781fe" Jan 21 10:50:41 crc kubenswrapper[4745]: I0121 10:50:41.516594 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-9t5nq" 
event={"ID":"02756c63-b6cc-42ef-ba04-fbd6127ccfa7","Type":"ContainerStarted","Data":"e37b9d5848536a0e43e628ce78b5f0cbd78db30db039ce17bba7018e7960fd82"} Jan 21 10:50:41 crc kubenswrapper[4745]: I0121 10:50:41.518751 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" event={"ID":"5f632930-37d6-4083-80d2-e56d394f5289","Type":"ContainerStarted","Data":"9f5433fae1875a5b74419f225895991ab393b74a44a62d8379962791a52c99d8"} Jan 21 10:50:41 crc kubenswrapper[4745]: I0121 10:50:41.522858 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-bpmz2" event={"ID":"976354ad-a346-409e-893a-d8edb62a6148","Type":"ContainerStarted","Data":"e7e7a383669ecfa8b5c4c300ad99a9221e8bd72834005037570fbc023778cf3d"} Jan 21 10:50:41 crc kubenswrapper[4745]: I0121 10:50:41.523035 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:41 crc kubenswrapper[4745]: I0121 10:50:41.526897 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" event={"ID":"89a613eb-ec6f-48dc-97d8-38e59281d04e","Type":"ContainerStarted","Data":"80600a30de4dceac51d9140ad32b800e2a01abf709073b6ce57442e325cd5fa7"} Jan 21 10:50:41 crc kubenswrapper[4745]: I0121 10:50:41.527104 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" Jan 21 10:50:41 crc kubenswrapper[4745]: I0121 10:50:41.550874 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" podStartSLOduration=2.673033792 podStartE2EDuration="6.550857178s" podCreationTimestamp="2026-01-21 10:50:35 +0000 UTC" firstStartedPulling="2026-01-21 10:50:36.363864794 +0000 UTC m=+820.824652392" lastFinishedPulling="2026-01-21 10:50:40.24168818 +0000 UTC m=+824.702475778" 
observedRunningTime="2026-01-21 10:50:41.537777109 +0000 UTC m=+825.998564767" watchObservedRunningTime="2026-01-21 10:50:41.550857178 +0000 UTC m=+826.011644776" Jan 21 10:50:41 crc kubenswrapper[4745]: I0121 10:50:41.560116 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-bpmz2" podStartSLOduration=2.170290679 podStartE2EDuration="6.560099257s" podCreationTimestamp="2026-01-21 10:50:35 +0000 UTC" firstStartedPulling="2026-01-21 10:50:35.879112028 +0000 UTC m=+820.339899626" lastFinishedPulling="2026-01-21 10:50:40.268920606 +0000 UTC m=+824.729708204" observedRunningTime="2026-01-21 10:50:41.557725941 +0000 UTC m=+826.018513549" watchObservedRunningTime="2026-01-21 10:50:41.560099257 +0000 UTC m=+826.020886865" Jan 21 10:50:42 crc kubenswrapper[4745]: I0121 10:50:42.009396 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77188672-dc9b-4158-a11d-d72ebfb12310" path="/var/lib/kubelet/pods/77188672-dc9b-4158-a11d-d72ebfb12310/volumes" Jan 21 10:50:43 crc kubenswrapper[4745]: I0121 10:50:43.541981 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-9t5nq" event={"ID":"02756c63-b6cc-42ef-ba04-fbd6127ccfa7","Type":"ContainerStarted","Data":"13a4fcf26bc27fb50a8927029e13d1e25ed7b308f39e64de7f6c694954132426"} Jan 21 10:50:43 crc kubenswrapper[4745]: I0121 10:50:43.565649 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" podStartSLOduration=5.031019284 podStartE2EDuration="8.565628355s" podCreationTimestamp="2026-01-21 10:50:35 +0000 UTC" firstStartedPulling="2026-01-21 10:50:36.716085672 +0000 UTC m=+821.176873270" lastFinishedPulling="2026-01-21 10:50:40.250694743 +0000 UTC m=+824.711482341" observedRunningTime="2026-01-21 10:50:41.581657624 +0000 UTC m=+826.042445232" watchObservedRunningTime="2026-01-21 10:50:43.565628355 +0000 UTC m=+828.026415973" 
Jan 21 10:50:45 crc kubenswrapper[4745]: I0121 10:50:45.860313 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-bpmz2" Jan 21 10:50:45 crc kubenswrapper[4745]: I0121 10:50:45.866949 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:50:45 crc kubenswrapper[4745]: I0121 10:50:45.867020 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:50:45 crc kubenswrapper[4745]: I0121 10:50:45.883290 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-9t5nq" podStartSLOduration=4.364932616 podStartE2EDuration="10.883268622s" podCreationTimestamp="2026-01-21 10:50:35 +0000 UTC" firstStartedPulling="2026-01-21 10:50:36.38580008 +0000 UTC m=+820.846587678" lastFinishedPulling="2026-01-21 10:50:42.904136086 +0000 UTC m=+827.364923684" observedRunningTime="2026-01-21 10:50:43.562067925 +0000 UTC m=+828.022855553" watchObservedRunningTime="2026-01-21 10:50:45.883268622 +0000 UTC m=+830.344056240" Jan 21 10:50:46 crc kubenswrapper[4745]: I0121 10:50:46.263606 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:46 crc kubenswrapper[4745]: I0121 10:50:46.263699 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:46 crc kubenswrapper[4745]: I0121 10:50:46.273785 
4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:46 crc kubenswrapper[4745]: I0121 10:50:46.563550 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-675b995478-g46rl" Jan 21 10:50:46 crc kubenswrapper[4745]: I0121 10:50:46.620084 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-j4phh"] Jan 21 10:50:56 crc kubenswrapper[4745]: I0121 10:50:56.411915 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-k4fch" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.299632 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st"] Jan 21 10:51:09 crc kubenswrapper[4745]: E0121 10:51:09.300810 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77188672-dc9b-4158-a11d-d72ebfb12310" containerName="extract-content" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.300826 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="77188672-dc9b-4158-a11d-d72ebfb12310" containerName="extract-content" Jan 21 10:51:09 crc kubenswrapper[4745]: E0121 10:51:09.300844 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77188672-dc9b-4158-a11d-d72ebfb12310" containerName="registry-server" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.300850 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="77188672-dc9b-4158-a11d-d72ebfb12310" containerName="registry-server" Jan 21 10:51:09 crc kubenswrapper[4745]: E0121 10:51:09.300861 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77188672-dc9b-4158-a11d-d72ebfb12310" containerName="extract-utilities" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.300867 4745 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="77188672-dc9b-4158-a11d-d72ebfb12310" containerName="extract-utilities" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.301002 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="77188672-dc9b-4158-a11d-d72ebfb12310" containerName="registry-server" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.301990 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.304245 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.316085 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st"] Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.363584 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.363693 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7464t\" (UniqueName: \"kubernetes.io/projected/52ff2424-850f-47fd-a0c4-fc91fca87048-kube-api-access-7464t\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.363789 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.465500 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7464t\" (UniqueName: \"kubernetes.io/projected/52ff2424-850f-47fd-a0c4-fc91fca87048-kube-api-access-7464t\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.465920 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.465994 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.466378 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.470966 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.491856 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7464t\" (UniqueName: \"kubernetes.io/projected/52ff2424-850f-47fd-a0c4-fc91fca87048-kube-api-access-7464t\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.619549 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:09 crc kubenswrapper[4745]: I0121 10:51:09.835706 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st"] Jan 21 10:51:10 crc kubenswrapper[4745]: I0121 10:51:10.735464 4745 generic.go:334] "Generic (PLEG): container finished" podID="52ff2424-850f-47fd-a0c4-fc91fca87048" containerID="39055d4b210d1d50613b8ee9d5fbb6d3c7c308852396321771302b2ba4b2f991" exitCode=0 Jan 21 10:51:10 crc kubenswrapper[4745]: I0121 10:51:10.735557 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" event={"ID":"52ff2424-850f-47fd-a0c4-fc91fca87048","Type":"ContainerDied","Data":"39055d4b210d1d50613b8ee9d5fbb6d3c7c308852396321771302b2ba4b2f991"} Jan 21 10:51:10 crc kubenswrapper[4745]: I0121 10:51:10.735599 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" event={"ID":"52ff2424-850f-47fd-a0c4-fc91fca87048","Type":"ContainerStarted","Data":"74a9b7039efb5d41a3b7951c0c5701f6eda12c3007400eadfa82497190ad7446"} Jan 21 10:51:11 crc kubenswrapper[4745]: I0121 10:51:11.688977 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-j4phh" podUID="284744f3-7eb6-4977-87c8-5c311188f840" containerName="console" containerID="cri-o://4e07c5a2f3d033b0e81dd61e9f6fb02e10c065b9399cc0297873f9ae965f9184" gracePeriod=15 Jan 21 10:51:12 crc kubenswrapper[4745]: I0121 10:51:12.778434 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j4phh_284744f3-7eb6-4977-87c8-5c311188f840/console/0.log" Jan 21 10:51:12 crc kubenswrapper[4745]: I0121 10:51:12.778980 4745 generic.go:334] "Generic (PLEG): container 
finished" podID="284744f3-7eb6-4977-87c8-5c311188f840" containerID="4e07c5a2f3d033b0e81dd61e9f6fb02e10c065b9399cc0297873f9ae965f9184" exitCode=2 Jan 21 10:51:12 crc kubenswrapper[4745]: I0121 10:51:12.779060 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j4phh" event={"ID":"284744f3-7eb6-4977-87c8-5c311188f840","Type":"ContainerDied","Data":"4e07c5a2f3d033b0e81dd61e9f6fb02e10c065b9399cc0297873f9ae965f9184"} Jan 21 10:51:12 crc kubenswrapper[4745]: I0121 10:51:12.921115 4745 patch_prober.go:28] interesting pod/console-f9d7485db-j4phh container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 21 10:51:12 crc kubenswrapper[4745]: I0121 10:51:12.921215 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-j4phh" podUID="284744f3-7eb6-4977-87c8-5c311188f840" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.263559 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j4phh_284744f3-7eb6-4977-87c8-5c311188f840/console/0.log" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.263631 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.326031 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-oauth-serving-cert\") pod \"284744f3-7eb6-4977-87c8-5c311188f840\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.326115 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-oauth-config\") pod \"284744f3-7eb6-4977-87c8-5c311188f840\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.326159 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkwst\" (UniqueName: \"kubernetes.io/projected/284744f3-7eb6-4977-87c8-5c311188f840-kube-api-access-qkwst\") pod \"284744f3-7eb6-4977-87c8-5c311188f840\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.326189 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-trusted-ca-bundle\") pod \"284744f3-7eb6-4977-87c8-5c311188f840\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.326217 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-serving-cert\") pod \"284744f3-7eb6-4977-87c8-5c311188f840\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.326257 4745 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-console-config\") pod \"284744f3-7eb6-4977-87c8-5c311188f840\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.326311 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-service-ca\") pod \"284744f3-7eb6-4977-87c8-5c311188f840\" (UID: \"284744f3-7eb6-4977-87c8-5c311188f840\") " Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.327639 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "284744f3-7eb6-4977-87c8-5c311188f840" (UID: "284744f3-7eb6-4977-87c8-5c311188f840"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.327785 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "284744f3-7eb6-4977-87c8-5c311188f840" (UID: "284744f3-7eb6-4977-87c8-5c311188f840"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.328604 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-console-config" (OuterVolumeSpecName: "console-config") pod "284744f3-7eb6-4977-87c8-5c311188f840" (UID: "284744f3-7eb6-4977-87c8-5c311188f840"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.329281 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-service-ca" (OuterVolumeSpecName: "service-ca") pod "284744f3-7eb6-4977-87c8-5c311188f840" (UID: "284744f3-7eb6-4977-87c8-5c311188f840"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.334921 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "284744f3-7eb6-4977-87c8-5c311188f840" (UID: "284744f3-7eb6-4977-87c8-5c311188f840"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.335440 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "284744f3-7eb6-4977-87c8-5c311188f840" (UID: "284744f3-7eb6-4977-87c8-5c311188f840"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.346452 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/284744f3-7eb6-4977-87c8-5c311188f840-kube-api-access-qkwst" (OuterVolumeSpecName: "kube-api-access-qkwst") pod "284744f3-7eb6-4977-87c8-5c311188f840" (UID: "284744f3-7eb6-4977-87c8-5c311188f840"). InnerVolumeSpecName "kube-api-access-qkwst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.427955 4745 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-service-ca\") on node \"crc\" DevicePath \"\"" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.428306 4745 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.428316 4745 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.428324 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkwst\" (UniqueName: \"kubernetes.io/projected/284744f3-7eb6-4977-87c8-5c311188f840-kube-api-access-qkwst\") on node \"crc\" DevicePath \"\"" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.428336 4745 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.428345 4745 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/284744f3-7eb6-4977-87c8-5c311188f840-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.428353 4745 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/284744f3-7eb6-4977-87c8-5c311188f840-console-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:51:13 crc 
kubenswrapper[4745]: I0121 10:51:13.789934 4745 generic.go:334] "Generic (PLEG): container finished" podID="52ff2424-850f-47fd-a0c4-fc91fca87048" containerID="36dfc9c1940c33d75216653cbe1cc3c5b18c076ea3795d627e4d5564fdc4b6f6" exitCode=0 Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.790492 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" event={"ID":"52ff2424-850f-47fd-a0c4-fc91fca87048","Type":"ContainerDied","Data":"36dfc9c1940c33d75216653cbe1cc3c5b18c076ea3795d627e4d5564fdc4b6f6"} Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.792467 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-j4phh_284744f3-7eb6-4977-87c8-5c311188f840/console/0.log" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.792584 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-j4phh" event={"ID":"284744f3-7eb6-4977-87c8-5c311188f840","Type":"ContainerDied","Data":"368575159fa50e38f5f63e47eb5df159cab8f85feda7e139b8cca66e049e585a"} Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.792634 4745 scope.go:117] "RemoveContainer" containerID="4e07c5a2f3d033b0e81dd61e9f6fb02e10c065b9399cc0297873f9ae965f9184" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.792835 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-j4phh" Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.840221 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-j4phh"] Jan 21 10:51:13 crc kubenswrapper[4745]: I0121 10:51:13.846464 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-j4phh"] Jan 21 10:51:14 crc kubenswrapper[4745]: I0121 10:51:14.008396 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="284744f3-7eb6-4977-87c8-5c311188f840" path="/var/lib/kubelet/pods/284744f3-7eb6-4977-87c8-5c311188f840/volumes" Jan 21 10:51:14 crc kubenswrapper[4745]: I0121 10:51:14.806045 4745 generic.go:334] "Generic (PLEG): container finished" podID="52ff2424-850f-47fd-a0c4-fc91fca87048" containerID="9b6ccaed1469f3e6b42a6f2be13e7d46603096769da0dcdf2c783d063510a3d4" exitCode=0 Jan 21 10:51:14 crc kubenswrapper[4745]: I0121 10:51:14.806098 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" event={"ID":"52ff2424-850f-47fd-a0c4-fc91fca87048","Type":"ContainerDied","Data":"9b6ccaed1469f3e6b42a6f2be13e7d46603096769da0dcdf2c783d063510a3d4"} Jan 21 10:51:15 crc kubenswrapper[4745]: I0121 10:51:15.866645 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:51:15 crc kubenswrapper[4745]: I0121 10:51:15.867032 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 21 10:51:15 crc kubenswrapper[4745]: I0121 10:51:15.867730 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:51:15 crc kubenswrapper[4745]: I0121 10:51:15.868388 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b1c6cf55f7b7acda4bdbdb072152cc988d22c5663c32b750b1831934e03f8b3"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:51:15 crc kubenswrapper[4745]: I0121 10:51:15.868449 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://5b1c6cf55f7b7acda4bdbdb072152cc988d22c5663c32b750b1831934e03f8b3" gracePeriod=600 Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.191709 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.268600 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7464t\" (UniqueName: \"kubernetes.io/projected/52ff2424-850f-47fd-a0c4-fc91fca87048-kube-api-access-7464t\") pod \"52ff2424-850f-47fd-a0c4-fc91fca87048\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.268662 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-util\") pod \"52ff2424-850f-47fd-a0c4-fc91fca87048\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.268696 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-bundle\") pod \"52ff2424-850f-47fd-a0c4-fc91fca87048\" (UID: \"52ff2424-850f-47fd-a0c4-fc91fca87048\") " Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.270129 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-bundle" (OuterVolumeSpecName: "bundle") pod "52ff2424-850f-47fd-a0c4-fc91fca87048" (UID: "52ff2424-850f-47fd-a0c4-fc91fca87048"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.275459 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52ff2424-850f-47fd-a0c4-fc91fca87048-kube-api-access-7464t" (OuterVolumeSpecName: "kube-api-access-7464t") pod "52ff2424-850f-47fd-a0c4-fc91fca87048" (UID: "52ff2424-850f-47fd-a0c4-fc91fca87048"). InnerVolumeSpecName "kube-api-access-7464t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.284149 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-util" (OuterVolumeSpecName: "util") pod "52ff2424-850f-47fd-a0c4-fc91fca87048" (UID: "52ff2424-850f-47fd-a0c4-fc91fca87048"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.370826 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7464t\" (UniqueName: \"kubernetes.io/projected/52ff2424-850f-47fd-a0c4-fc91fca87048-kube-api-access-7464t\") on node \"crc\" DevicePath \"\"" Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.370874 4745 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-util\") on node \"crc\" DevicePath \"\"" Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.370896 4745 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/52ff2424-850f-47fd-a0c4-fc91fca87048-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.822273 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="5b1c6cf55f7b7acda4bdbdb072152cc988d22c5663c32b750b1831934e03f8b3" exitCode=0 Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.822364 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"5b1c6cf55f7b7acda4bdbdb072152cc988d22c5663c32b750b1831934e03f8b3"} Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.822763 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"a809b13ad0c1d2cb669d0700f6bab3b22eddc9ebef1f9677d885d8d6e5615f59"} Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.822808 4745 scope.go:117] "RemoveContainer" containerID="afdf3a4d67c346d0632a443ca9dab222b7a63eeb4c78313a794bd20986cb3242" Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.827127 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" event={"ID":"52ff2424-850f-47fd-a0c4-fc91fca87048","Type":"ContainerDied","Data":"74a9b7039efb5d41a3b7951c0c5701f6eda12c3007400eadfa82497190ad7446"} Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.827183 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74a9b7039efb5d41a3b7951c0c5701f6eda12c3007400eadfa82497190ad7446" Jan 21 10:51:16 crc kubenswrapper[4745]: I0121 10:51:16.827155 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.561421 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr"] Jan 21 10:51:24 crc kubenswrapper[4745]: E0121 10:51:24.562281 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ff2424-850f-47fd-a0c4-fc91fca87048" containerName="util" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.562296 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ff2424-850f-47fd-a0c4-fc91fca87048" containerName="util" Jan 21 10:51:24 crc kubenswrapper[4745]: E0121 10:51:24.562309 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="284744f3-7eb6-4977-87c8-5c311188f840" containerName="console" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.562317 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="284744f3-7eb6-4977-87c8-5c311188f840" containerName="console" Jan 21 10:51:24 crc kubenswrapper[4745]: E0121 10:51:24.562334 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ff2424-850f-47fd-a0c4-fc91fca87048" containerName="pull" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.562342 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ff2424-850f-47fd-a0c4-fc91fca87048" containerName="pull" Jan 21 10:51:24 crc kubenswrapper[4745]: E0121 10:51:24.562354 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52ff2424-850f-47fd-a0c4-fc91fca87048" containerName="extract" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.562362 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="52ff2424-850f-47fd-a0c4-fc91fca87048" containerName="extract" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.562487 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="52ff2424-850f-47fd-a0c4-fc91fca87048" 
containerName="extract" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.562502 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="284744f3-7eb6-4977-87c8-5c311188f840" containerName="console" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.562990 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.569384 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.573068 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.573419 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.579389 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.580885 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tt7jc" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.592232 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr"] Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.684650 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf161197-4160-49ab-a126-edca468534b7-apiservice-cert\") pod \"metallb-operator-controller-manager-65d59f8cf8-8xqnr\" (UID: \"cf161197-4160-49ab-a126-edca468534b7\") " pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 
21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.684711 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrmfs\" (UniqueName: \"kubernetes.io/projected/cf161197-4160-49ab-a126-edca468534b7-kube-api-access-lrmfs\") pod \"metallb-operator-controller-manager-65d59f8cf8-8xqnr\" (UID: \"cf161197-4160-49ab-a126-edca468534b7\") " pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.684846 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf161197-4160-49ab-a126-edca468534b7-webhook-cert\") pod \"metallb-operator-controller-manager-65d59f8cf8-8xqnr\" (UID: \"cf161197-4160-49ab-a126-edca468534b7\") " pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.785550 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf161197-4160-49ab-a126-edca468534b7-webhook-cert\") pod \"metallb-operator-controller-manager-65d59f8cf8-8xqnr\" (UID: \"cf161197-4160-49ab-a126-edca468534b7\") " pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.785901 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf161197-4160-49ab-a126-edca468534b7-apiservice-cert\") pod \"metallb-operator-controller-manager-65d59f8cf8-8xqnr\" (UID: \"cf161197-4160-49ab-a126-edca468534b7\") " pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.785928 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrmfs\" (UniqueName: 
\"kubernetes.io/projected/cf161197-4160-49ab-a126-edca468534b7-kube-api-access-lrmfs\") pod \"metallb-operator-controller-manager-65d59f8cf8-8xqnr\" (UID: \"cf161197-4160-49ab-a126-edca468534b7\") " pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.800616 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cf161197-4160-49ab-a126-edca468534b7-apiservice-cert\") pod \"metallb-operator-controller-manager-65d59f8cf8-8xqnr\" (UID: \"cf161197-4160-49ab-a126-edca468534b7\") " pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.823454 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrmfs\" (UniqueName: \"kubernetes.io/projected/cf161197-4160-49ab-a126-edca468534b7-kube-api-access-lrmfs\") pod \"metallb-operator-controller-manager-65d59f8cf8-8xqnr\" (UID: \"cf161197-4160-49ab-a126-edca468534b7\") " pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.827988 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf161197-4160-49ab-a126-edca468534b7-webhook-cert\") pod \"metallb-operator-controller-manager-65d59f8cf8-8xqnr\" (UID: \"cf161197-4160-49ab-a126-edca468534b7\") " pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:51:24 crc kubenswrapper[4745]: I0121 10:51:24.879085 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.116794 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt"] Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.117726 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.124488 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.124763 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.128709 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-2c8cl" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.196842 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58552\" (UniqueName: \"kubernetes.io/projected/1be9da42-8db6-47b9-b7ec-788b04db264d-kube-api-access-58552\") pod \"metallb-operator-webhook-server-6b7c494555-zdlbt\" (UID: \"1be9da42-8db6-47b9-b7ec-788b04db264d\") " pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.196917 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1be9da42-8db6-47b9-b7ec-788b04db264d-webhook-cert\") pod \"metallb-operator-webhook-server-6b7c494555-zdlbt\" (UID: \"1be9da42-8db6-47b9-b7ec-788b04db264d\") " pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 
10:51:25.196945 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1be9da42-8db6-47b9-b7ec-788b04db264d-apiservice-cert\") pod \"metallb-operator-webhook-server-6b7c494555-zdlbt\" (UID: \"1be9da42-8db6-47b9-b7ec-788b04db264d\") " pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.200770 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt"] Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.296840 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr"] Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.298383 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1be9da42-8db6-47b9-b7ec-788b04db264d-webhook-cert\") pod \"metallb-operator-webhook-server-6b7c494555-zdlbt\" (UID: \"1be9da42-8db6-47b9-b7ec-788b04db264d\") " pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.298444 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1be9da42-8db6-47b9-b7ec-788b04db264d-apiservice-cert\") pod \"metallb-operator-webhook-server-6b7c494555-zdlbt\" (UID: \"1be9da42-8db6-47b9-b7ec-788b04db264d\") " pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.298499 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58552\" (UniqueName: \"kubernetes.io/projected/1be9da42-8db6-47b9-b7ec-788b04db264d-kube-api-access-58552\") pod \"metallb-operator-webhook-server-6b7c494555-zdlbt\" (UID: 
\"1be9da42-8db6-47b9-b7ec-788b04db264d\") " pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.304645 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/1be9da42-8db6-47b9-b7ec-788b04db264d-apiservice-cert\") pod \"metallb-operator-webhook-server-6b7c494555-zdlbt\" (UID: \"1be9da42-8db6-47b9-b7ec-788b04db264d\") " pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.309366 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/1be9da42-8db6-47b9-b7ec-788b04db264d-webhook-cert\") pod \"metallb-operator-webhook-server-6b7c494555-zdlbt\" (UID: \"1be9da42-8db6-47b9-b7ec-788b04db264d\") " pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.322490 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58552\" (UniqueName: \"kubernetes.io/projected/1be9da42-8db6-47b9-b7ec-788b04db264d-kube-api-access-58552\") pod \"metallb-operator-webhook-server-6b7c494555-zdlbt\" (UID: \"1be9da42-8db6-47b9-b7ec-788b04db264d\") " pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.458362 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.886936 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" event={"ID":"cf161197-4160-49ab-a126-edca468534b7","Type":"ContainerStarted","Data":"2e771f8cb9d69718f51978ce1b6044144b025dec8de268ea5e73463929870924"} Jan 21 10:51:25 crc kubenswrapper[4745]: I0121 10:51:25.969299 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt"] Jan 21 10:51:26 crc kubenswrapper[4745]: I0121 10:51:26.894241 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" event={"ID":"1be9da42-8db6-47b9-b7ec-788b04db264d","Type":"ContainerStarted","Data":"4a04e2bfcc633789c5841d3ed856a66912ef8d9189757e8f2c81bdc3bc3e5317"} Jan 21 10:51:29 crc kubenswrapper[4745]: I0121 10:51:29.913585 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" event={"ID":"cf161197-4160-49ab-a126-edca468534b7","Type":"ContainerStarted","Data":"bacb13f5d19a7b88c40ebcf9bb6bb197ae9abb30f59f91d7822e22c71707791a"} Jan 21 10:51:29 crc kubenswrapper[4745]: I0121 10:51:29.913999 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:51:29 crc kubenswrapper[4745]: I0121 10:51:29.942693 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" podStartSLOduration=2.707568828 podStartE2EDuration="5.942677699s" podCreationTimestamp="2026-01-21 10:51:24 +0000 UTC" firstStartedPulling="2026-01-21 10:51:25.310075394 +0000 UTC m=+869.770862992" lastFinishedPulling="2026-01-21 10:51:28.545184265 +0000 UTC 
m=+873.005971863" observedRunningTime="2026-01-21 10:51:29.938569062 +0000 UTC m=+874.399356660" watchObservedRunningTime="2026-01-21 10:51:29.942677699 +0000 UTC m=+874.403465297" Jan 21 10:51:36 crc kubenswrapper[4745]: I0121 10:51:36.958514 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" event={"ID":"1be9da42-8db6-47b9-b7ec-788b04db264d","Type":"ContainerStarted","Data":"4f41f974f27eb1c67082a378c4a3e10071b2cc345b624cc0f1806435987e34df"} Jan 21 10:51:36 crc kubenswrapper[4745]: I0121 10:51:36.959589 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:51:36 crc kubenswrapper[4745]: I0121 10:51:36.984453 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" podStartSLOduration=1.970719791 podStartE2EDuration="11.984438458s" podCreationTimestamp="2026-01-21 10:51:25 +0000 UTC" firstStartedPulling="2026-01-21 10:51:25.977142258 +0000 UTC m=+870.437929856" lastFinishedPulling="2026-01-21 10:51:35.990860925 +0000 UTC m=+880.451648523" observedRunningTime="2026-01-21 10:51:36.976722206 +0000 UTC m=+881.437509804" watchObservedRunningTime="2026-01-21 10:51:36.984438458 +0000 UTC m=+881.445226056" Jan 21 10:51:55 crc kubenswrapper[4745]: I0121 10:51:55.464003 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.171835 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mcczs"] Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.174502 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.194569 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcczs"] Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.277158 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-catalog-content\") pod \"redhat-marketplace-mcczs\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.277278 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtlkl\" (UniqueName: \"kubernetes.io/projected/8bf4ac89-473c-4d31-902f-a5cfd122edf0-kube-api-access-vtlkl\") pod \"redhat-marketplace-mcczs\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.277574 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-utilities\") pod \"redhat-marketplace-mcczs\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.378444 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-utilities\") pod \"redhat-marketplace-mcczs\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.378489 4745 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-catalog-content\") pod \"redhat-marketplace-mcczs\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.378523 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtlkl\" (UniqueName: \"kubernetes.io/projected/8bf4ac89-473c-4d31-902f-a5cfd122edf0-kube-api-access-vtlkl\") pod \"redhat-marketplace-mcczs\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.379319 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-utilities\") pod \"redhat-marketplace-mcczs\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.379597 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-catalog-content\") pod \"redhat-marketplace-mcczs\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.397320 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtlkl\" (UniqueName: \"kubernetes.io/projected/8bf4ac89-473c-4d31-902f-a5cfd122edf0-kube-api-access-vtlkl\") pod \"redhat-marketplace-mcczs\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.490454 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:03 crc kubenswrapper[4745]: I0121 10:52:03.864327 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcczs"] Jan 21 10:52:04 crc kubenswrapper[4745]: I0121 10:52:04.140503 4745 generic.go:334] "Generic (PLEG): container finished" podID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" containerID="d1ae8af1ef8d13f7c7dc2041ac7f7b805d4869bb8f5aff815c803c24757321f3" exitCode=0 Jan 21 10:52:04 crc kubenswrapper[4745]: I0121 10:52:04.140579 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcczs" event={"ID":"8bf4ac89-473c-4d31-902f-a5cfd122edf0","Type":"ContainerDied","Data":"d1ae8af1ef8d13f7c7dc2041ac7f7b805d4869bb8f5aff815c803c24757321f3"} Jan 21 10:52:04 crc kubenswrapper[4745]: I0121 10:52:04.140608 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcczs" event={"ID":"8bf4ac89-473c-4d31-902f-a5cfd122edf0","Type":"ContainerStarted","Data":"6d559a9f09b88c4fb5732d294a44eb9f1d43c4eeb90de6c1c4d603983d9ea41d"} Jan 21 10:52:04 crc kubenswrapper[4745]: I0121 10:52:04.881983 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-65d59f8cf8-8xqnr" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.572768 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9f9vp"] Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.574925 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.576888 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.577342 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.577653 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cf458" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.599652 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466"] Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.601121 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.602984 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.627337 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466"] Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.711550 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-64hm8"] Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.712758 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-64hm8" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.714594 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-metrics\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.714627 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/db2f79cd-c6c7-459f-bf98-002583ba5ddd-frr-startup\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.714646 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db2f79cd-c6c7-459f-bf98-002583ba5ddd-metrics-certs\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.714666 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e2a9cf8-053e-4225-b055-45d69ebfaa94-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dq466\" (UID: \"5e2a9cf8-053e-4225-b055-45d69ebfaa94\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.714689 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vxbl\" (UniqueName: \"kubernetes.io/projected/db2f79cd-c6c7-459f-bf98-002583ba5ddd-kube-api-access-7vxbl\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 
10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.714722 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-frr-sockets\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.714765 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4zqr\" (UniqueName: \"kubernetes.io/projected/5e2a9cf8-053e-4225-b055-45d69ebfaa94-kube-api-access-h4zqr\") pod \"frr-k8s-webhook-server-7df86c4f6c-dq466\" (UID: \"5e2a9cf8-053e-4225-b055-45d69ebfaa94\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.714791 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-frr-conf\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.714807 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-reloader\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.719470 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.719840 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.720130 4745 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.719483 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-cjsm5" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.751453 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-lgq6w"] Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.752705 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.775029 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.785023 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lgq6w"] Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.815963 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-frr-sockets\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816024 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4zqr\" (UniqueName: \"kubernetes.io/projected/5e2a9cf8-053e-4225-b055-45d69ebfaa94-kube-api-access-h4zqr\") pod \"frr-k8s-webhook-server-7df86c4f6c-dq466\" (UID: \"5e2a9cf8-053e-4225-b055-45d69ebfaa94\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816059 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-frr-conf\") pod \"frr-k8s-9f9vp\" 
(UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816094 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-reloader\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816126 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-metrics\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816155 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/db2f79cd-c6c7-459f-bf98-002583ba5ddd-frr-startup\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816174 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db2f79cd-c6c7-459f-bf98-002583ba5ddd-metrics-certs\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816206 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-memberlist\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816229 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e2a9cf8-053e-4225-b055-45d69ebfaa94-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dq466\" (UID: \"5e2a9cf8-053e-4225-b055-45d69ebfaa94\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816257 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/88871d5a-093a-41c6-98bf-629e6769ba71-metallb-excludel2\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816276 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vxbl\" (UniqueName: \"kubernetes.io/projected/db2f79cd-c6c7-459f-bf98-002583ba5ddd-kube-api-access-7vxbl\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816300 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qlr\" (UniqueName: \"kubernetes.io/projected/88871d5a-093a-41c6-98bf-629e6769ba71-kube-api-access-b6qlr\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816320 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-metrics-certs\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.816833 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" 
(UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-frr-sockets\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: E0121 10:52:05.816965 4745 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 21 10:52:05 crc kubenswrapper[4745]: E0121 10:52:05.817026 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5e2a9cf8-053e-4225-b055-45d69ebfaa94-cert podName:5e2a9cf8-053e-4225-b055-45d69ebfaa94 nodeName:}" failed. No retries permitted until 2026-01-21 10:52:06.317003475 +0000 UTC m=+910.777791073 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/5e2a9cf8-053e-4225-b055-45d69ebfaa94-cert") pod "frr-k8s-webhook-server-7df86c4f6c-dq466" (UID: "5e2a9cf8-053e-4225-b055-45d69ebfaa94") : secret "frr-k8s-webhook-server-cert" not found Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.817208 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-reloader\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.817369 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-metrics\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.817648 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/db2f79cd-c6c7-459f-bf98-002583ba5ddd-frr-conf\") pod \"frr-k8s-9f9vp\" (UID: 
\"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.818210 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/db2f79cd-c6c7-459f-bf98-002583ba5ddd-frr-startup\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.836874 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/db2f79cd-c6c7-459f-bf98-002583ba5ddd-metrics-certs\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.838800 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4zqr\" (UniqueName: \"kubernetes.io/projected/5e2a9cf8-053e-4225-b055-45d69ebfaa94-kube-api-access-h4zqr\") pod \"frr-k8s-webhook-server-7df86c4f6c-dq466\" (UID: \"5e2a9cf8-053e-4225-b055-45d69ebfaa94\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.840418 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vxbl\" (UniqueName: \"kubernetes.io/projected/db2f79cd-c6c7-459f-bf98-002583ba5ddd-kube-api-access-7vxbl\") pod \"frr-k8s-9f9vp\" (UID: \"db2f79cd-c6c7-459f-bf98-002583ba5ddd\") " pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.887123 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.917740 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-memberlist\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.917812 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/88871d5a-093a-41c6-98bf-629e6769ba71-metallb-excludel2\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.917838 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7637e4-fd78-447b-98ea-20af5f3c5c2a-cert\") pod \"controller-6968d8fdc4-lgq6w\" (UID: \"ad7637e4-fd78-447b-98ea-20af5f3c5c2a\") " pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.917876 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6qlr\" (UniqueName: \"kubernetes.io/projected/88871d5a-093a-41c6-98bf-629e6769ba71-kube-api-access-b6qlr\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.917897 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-metrics-certs\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.917926 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpdnc\" (UniqueName: \"kubernetes.io/projected/ad7637e4-fd78-447b-98ea-20af5f3c5c2a-kube-api-access-fpdnc\") pod \"controller-6968d8fdc4-lgq6w\" (UID: \"ad7637e4-fd78-447b-98ea-20af5f3c5c2a\") " pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.917944 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad7637e4-fd78-447b-98ea-20af5f3c5c2a-metrics-certs\") pod \"controller-6968d8fdc4-lgq6w\" (UID: \"ad7637e4-fd78-447b-98ea-20af5f3c5c2a\") " pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:05 crc kubenswrapper[4745]: E0121 10:52:05.918072 4745 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 21 10:52:05 crc kubenswrapper[4745]: E0121 10:52:05.918112 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-memberlist podName:88871d5a-093a-41c6-98bf-629e6769ba71 nodeName:}" failed. No retries permitted until 2026-01-21 10:52:06.418099599 +0000 UTC m=+910.878887197 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-memberlist") pod "speaker-64hm8" (UID: "88871d5a-093a-41c6-98bf-629e6769ba71") : secret "metallb-memberlist" not found Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.919031 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/88871d5a-093a-41c6-98bf-629e6769ba71-metallb-excludel2\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.924707 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-metrics-certs\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:05 crc kubenswrapper[4745]: I0121 10:52:05.969073 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qlr\" (UniqueName: \"kubernetes.io/projected/88871d5a-093a-41c6-98bf-629e6769ba71-kube-api-access-b6qlr\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.018698 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpdnc\" (UniqueName: \"kubernetes.io/projected/ad7637e4-fd78-447b-98ea-20af5f3c5c2a-kube-api-access-fpdnc\") pod \"controller-6968d8fdc4-lgq6w\" (UID: \"ad7637e4-fd78-447b-98ea-20af5f3c5c2a\") " pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.018749 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad7637e4-fd78-447b-98ea-20af5f3c5c2a-metrics-certs\") pod 
\"controller-6968d8fdc4-lgq6w\" (UID: \"ad7637e4-fd78-447b-98ea-20af5f3c5c2a\") " pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.018843 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7637e4-fd78-447b-98ea-20af5f3c5c2a-cert\") pod \"controller-6968d8fdc4-lgq6w\" (UID: \"ad7637e4-fd78-447b-98ea-20af5f3c5c2a\") " pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.024297 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad7637e4-fd78-447b-98ea-20af5f3c5c2a-metrics-certs\") pod \"controller-6968d8fdc4-lgq6w\" (UID: \"ad7637e4-fd78-447b-98ea-20af5f3c5c2a\") " pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.030165 4745 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.044249 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpdnc\" (UniqueName: \"kubernetes.io/projected/ad7637e4-fd78-447b-98ea-20af5f3c5c2a-kube-api-access-fpdnc\") pod \"controller-6968d8fdc4-lgq6w\" (UID: \"ad7637e4-fd78-447b-98ea-20af5f3c5c2a\") " pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.055350 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ad7637e4-fd78-447b-98ea-20af5f3c5c2a-cert\") pod \"controller-6968d8fdc4-lgq6w\" (UID: \"ad7637e4-fd78-447b-98ea-20af5f3c5c2a\") " pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.089139 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.179648 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcczs" event={"ID":"8bf4ac89-473c-4d31-902f-a5cfd122edf0","Type":"ContainerStarted","Data":"479246080d4424bf86032d1faf8d9a989334caf0fdb50c146baebeee93660dfd"} Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.195725 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9f9vp" event={"ID":"db2f79cd-c6c7-459f-bf98-002583ba5ddd","Type":"ContainerStarted","Data":"9534809f10840ec2b2051d31f2b476ecc3477bf5b8d77d09b55eb9a5b72de78a"} Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.336666 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e2a9cf8-053e-4225-b055-45d69ebfaa94-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dq466\" (UID: \"5e2a9cf8-053e-4225-b055-45d69ebfaa94\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.353426 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5e2a9cf8-053e-4225-b055-45d69ebfaa94-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-dq466\" (UID: \"5e2a9cf8-053e-4225-b055-45d69ebfaa94\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.439278 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-memberlist\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:06 crc kubenswrapper[4745]: E0121 10:52:06.439471 4745 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" 
not found Jan 21 10:52:06 crc kubenswrapper[4745]: E0121 10:52:06.439559 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-memberlist podName:88871d5a-093a-41c6-98bf-629e6769ba71 nodeName:}" failed. No retries permitted until 2026-01-21 10:52:07.439521705 +0000 UTC m=+911.900309303 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-memberlist") pod "speaker-64hm8" (UID: "88871d5a-093a-41c6-98bf-629e6769ba71") : secret "metallb-memberlist" not found Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.490774 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-lgq6w"] Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.514072 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" Jan 21 10:52:06 crc kubenswrapper[4745]: I0121 10:52:06.794708 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466"] Jan 21 10:52:06 crc kubenswrapper[4745]: W0121 10:52:06.802291 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e2a9cf8_053e_4225_b055_45d69ebfaa94.slice/crio-14086d5c3c161e47d502b704d2eabb5f4fae82108631bafd63e6ee0c34c858e1 WatchSource:0}: Error finding container 14086d5c3c161e47d502b704d2eabb5f4fae82108631bafd63e6ee0c34c858e1: Status 404 returned error can't find the container with id 14086d5c3c161e47d502b704d2eabb5f4fae82108631bafd63e6ee0c34c858e1 Jan 21 10:52:07 crc kubenswrapper[4745]: I0121 10:52:07.203838 4745 generic.go:334] "Generic (PLEG): container finished" podID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" containerID="479246080d4424bf86032d1faf8d9a989334caf0fdb50c146baebeee93660dfd" exitCode=0 Jan 21 10:52:07 crc 
kubenswrapper[4745]: I0121 10:52:07.203904 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcczs" event={"ID":"8bf4ac89-473c-4d31-902f-a5cfd122edf0","Type":"ContainerDied","Data":"479246080d4424bf86032d1faf8d9a989334caf0fdb50c146baebeee93660dfd"} Jan 21 10:52:07 crc kubenswrapper[4745]: I0121 10:52:07.208917 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" event={"ID":"5e2a9cf8-053e-4225-b055-45d69ebfaa94","Type":"ContainerStarted","Data":"14086d5c3c161e47d502b704d2eabb5f4fae82108631bafd63e6ee0c34c858e1"} Jan 21 10:52:07 crc kubenswrapper[4745]: I0121 10:52:07.214241 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lgq6w" event={"ID":"ad7637e4-fd78-447b-98ea-20af5f3c5c2a","Type":"ContainerStarted","Data":"87963de95069880648d3bc82124161195a0601e4e40bfdc1cb6ddd3c06fd793c"} Jan 21 10:52:07 crc kubenswrapper[4745]: I0121 10:52:07.215330 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:07 crc kubenswrapper[4745]: I0121 10:52:07.215349 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lgq6w" event={"ID":"ad7637e4-fd78-447b-98ea-20af5f3c5c2a","Type":"ContainerStarted","Data":"dbe1bb7be568446f9215c7683ca84b17697751eda12b740f31ea8db7de718f7f"} Jan 21 10:52:07 crc kubenswrapper[4745]: I0121 10:52:07.215363 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-lgq6w" event={"ID":"ad7637e4-fd78-447b-98ea-20af5f3c5c2a","Type":"ContainerStarted","Data":"4e44663673c23d5d4d7fd1a37a00b0a255761a3600553fbb4ddab56f987cb37a"} Jan 21 10:52:07 crc kubenswrapper[4745]: I0121 10:52:07.259354 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-lgq6w" podStartSLOduration=2.259325663 
podStartE2EDuration="2.259325663s" podCreationTimestamp="2026-01-21 10:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:52:07.256043838 +0000 UTC m=+911.716831436" watchObservedRunningTime="2026-01-21 10:52:07.259325663 +0000 UTC m=+911.720113261" Jan 21 10:52:07 crc kubenswrapper[4745]: I0121 10:52:07.458941 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-memberlist\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:07 crc kubenswrapper[4745]: I0121 10:52:07.468264 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/88871d5a-093a-41c6-98bf-629e6769ba71-memberlist\") pod \"speaker-64hm8\" (UID: \"88871d5a-093a-41c6-98bf-629e6769ba71\") " pod="metallb-system/speaker-64hm8" Jan 21 10:52:07 crc kubenswrapper[4745]: I0121 10:52:07.530381 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-64hm8" Jan 21 10:52:08 crc kubenswrapper[4745]: I0121 10:52:08.227346 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-64hm8" event={"ID":"88871d5a-093a-41c6-98bf-629e6769ba71","Type":"ContainerStarted","Data":"a0197a18ffa290b493f2552700d5e72f81b03b8274fd932fb47d504a0a7bfcd1"} Jan 21 10:52:08 crc kubenswrapper[4745]: I0121 10:52:08.227393 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-64hm8" event={"ID":"88871d5a-093a-41c6-98bf-629e6769ba71","Type":"ContainerStarted","Data":"fa72d33cd30230fcff17a2c0fa098d7f241c7ebbd58f3d8b5c191c67ac6f1a95"} Jan 21 10:52:08 crc kubenswrapper[4745]: I0121 10:52:08.231406 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcczs" event={"ID":"8bf4ac89-473c-4d31-902f-a5cfd122edf0","Type":"ContainerStarted","Data":"91a62e640d26858d3099f470f15ebcb72c7ca298bc08a5860d05bd6ea0b5cd4d"} Jan 21 10:52:08 crc kubenswrapper[4745]: I0121 10:52:08.252764 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mcczs" podStartSLOduration=1.657761991 podStartE2EDuration="5.252748053s" podCreationTimestamp="2026-01-21 10:52:03 +0000 UTC" firstStartedPulling="2026-01-21 10:52:04.142578508 +0000 UTC m=+908.603366106" lastFinishedPulling="2026-01-21 10:52:07.73756457 +0000 UTC m=+912.198352168" observedRunningTime="2026-01-21 10:52:08.251244883 +0000 UTC m=+912.712032481" watchObservedRunningTime="2026-01-21 10:52:08.252748053 +0000 UTC m=+912.713535651" Jan 21 10:52:09 crc kubenswrapper[4745]: I0121 10:52:09.242344 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-64hm8" event={"ID":"88871d5a-093a-41c6-98bf-629e6769ba71","Type":"ContainerStarted","Data":"6af4a202ad24af8b85066ba70413cc338f5a1700e75638613e8deb76c01ffeb4"} Jan 21 10:52:09 crc kubenswrapper[4745]: I0121 10:52:09.277088 4745 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-64hm8" podStartSLOduration=4.277070979 podStartE2EDuration="4.277070979s" podCreationTimestamp="2026-01-21 10:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:52:09.271498403 +0000 UTC m=+913.732286001" watchObservedRunningTime="2026-01-21 10:52:09.277070979 +0000 UTC m=+913.737858577" Jan 21 10:52:10 crc kubenswrapper[4745]: I0121 10:52:10.255963 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-64hm8" Jan 21 10:52:13 crc kubenswrapper[4745]: I0121 10:52:13.490634 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:13 crc kubenswrapper[4745]: I0121 10:52:13.490910 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:13 crc kubenswrapper[4745]: I0121 10:52:13.598684 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:14 crc kubenswrapper[4745]: I0121 10:52:14.459749 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:14 crc kubenswrapper[4745]: I0121 10:52:14.511486 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcczs"] Jan 21 10:52:16 crc kubenswrapper[4745]: I0121 10:52:16.106821 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-lgq6w" Jan 21 10:52:16 crc kubenswrapper[4745]: I0121 10:52:16.332604 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mcczs" 
podUID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" containerName="registry-server" containerID="cri-o://91a62e640d26858d3099f470f15ebcb72c7ca298bc08a5860d05bd6ea0b5cd4d" gracePeriod=2 Jan 21 10:52:17 crc kubenswrapper[4745]: I0121 10:52:17.343113 4745 generic.go:334] "Generic (PLEG): container finished" podID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" containerID="91a62e640d26858d3099f470f15ebcb72c7ca298bc08a5860d05bd6ea0b5cd4d" exitCode=0 Jan 21 10:52:17 crc kubenswrapper[4745]: I0121 10:52:17.343178 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcczs" event={"ID":"8bf4ac89-473c-4d31-902f-a5cfd122edf0","Type":"ContainerDied","Data":"91a62e640d26858d3099f470f15ebcb72c7ca298bc08a5860d05bd6ea0b5cd4d"} Jan 21 10:52:17 crc kubenswrapper[4745]: I0121 10:52:17.537688 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-64hm8" Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.374159 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mcczs" event={"ID":"8bf4ac89-473c-4d31-902f-a5cfd122edf0","Type":"ContainerDied","Data":"6d559a9f09b88c4fb5732d294a44eb9f1d43c4eeb90de6c1c4d603983d9ea41d"} Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.374455 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d559a9f09b88c4fb5732d294a44eb9f1d43c4eeb90de6c1c4d603983d9ea41d" Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.377213 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.522559 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-catalog-content\") pod \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.522656 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-utilities\") pod \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.522682 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtlkl\" (UniqueName: \"kubernetes.io/projected/8bf4ac89-473c-4d31-902f-a5cfd122edf0-kube-api-access-vtlkl\") pod \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\" (UID: \"8bf4ac89-473c-4d31-902f-a5cfd122edf0\") " Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.532001 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf4ac89-473c-4d31-902f-a5cfd122edf0-kube-api-access-vtlkl" (OuterVolumeSpecName: "kube-api-access-vtlkl") pod "8bf4ac89-473c-4d31-902f-a5cfd122edf0" (UID: "8bf4ac89-473c-4d31-902f-a5cfd122edf0"). InnerVolumeSpecName "kube-api-access-vtlkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.533133 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-utilities" (OuterVolumeSpecName: "utilities") pod "8bf4ac89-473c-4d31-902f-a5cfd122edf0" (UID: "8bf4ac89-473c-4d31-902f-a5cfd122edf0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.542838 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bf4ac89-473c-4d31-902f-a5cfd122edf0" (UID: "8bf4ac89-473c-4d31-902f-a5cfd122edf0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.631740 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.631795 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bf4ac89-473c-4d31-902f-a5cfd122edf0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:52:18 crc kubenswrapper[4745]: I0121 10:52:18.631806 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtlkl\" (UniqueName: \"kubernetes.io/projected/8bf4ac89-473c-4d31-902f-a5cfd122edf0-kube-api-access-vtlkl\") on node \"crc\" DevicePath \"\"" Jan 21 10:52:19 crc kubenswrapper[4745]: I0121 10:52:19.385499 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" event={"ID":"5e2a9cf8-053e-4225-b055-45d69ebfaa94","Type":"ContainerStarted","Data":"c19b254b01792856f531286db5af7303c5e5a50b01048747117cf550e9d38a35"} Jan 21 10:52:19 crc kubenswrapper[4745]: I0121 10:52:19.385818 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" Jan 21 10:52:19 crc kubenswrapper[4745]: I0121 10:52:19.387571 4745 generic.go:334] "Generic (PLEG): container finished" 
podID="db2f79cd-c6c7-459f-bf98-002583ba5ddd" containerID="3949ed222a6333addae26f2d15ec2bcc1782fdb29a594ec744ddc2889a5ec699" exitCode=0 Jan 21 10:52:19 crc kubenswrapper[4745]: I0121 10:52:19.387626 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9f9vp" event={"ID":"db2f79cd-c6c7-459f-bf98-002583ba5ddd","Type":"ContainerDied","Data":"3949ed222a6333addae26f2d15ec2bcc1782fdb29a594ec744ddc2889a5ec699"} Jan 21 10:52:19 crc kubenswrapper[4745]: I0121 10:52:19.387789 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mcczs" Jan 21 10:52:19 crc kubenswrapper[4745]: I0121 10:52:19.409995 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" podStartSLOduration=3.155557701 podStartE2EDuration="14.409957423s" podCreationTimestamp="2026-01-21 10:52:05 +0000 UTC" firstStartedPulling="2026-01-21 10:52:06.805079225 +0000 UTC m=+911.265866823" lastFinishedPulling="2026-01-21 10:52:18.059478947 +0000 UTC m=+922.520266545" observedRunningTime="2026-01-21 10:52:19.405818034 +0000 UTC m=+923.866605652" watchObservedRunningTime="2026-01-21 10:52:19.409957423 +0000 UTC m=+923.870745031" Jan 21 10:52:19 crc kubenswrapper[4745]: I0121 10:52:19.482360 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcczs"] Jan 21 10:52:19 crc kubenswrapper[4745]: I0121 10:52:19.489180 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mcczs"] Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.007959 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" path="/var/lib/kubelet/pods/8bf4ac89-473c-4d31-902f-a5cfd122edf0/volumes" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.394968 4745 generic.go:334] "Generic (PLEG): container finished" 
podID="db2f79cd-c6c7-459f-bf98-002583ba5ddd" containerID="95615665a31bb5d5aa9349c95ea5d346f96814ae0a8094ee01a806dd16483fb6" exitCode=0 Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.395048 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9f9vp" event={"ID":"db2f79cd-c6c7-459f-bf98-002583ba5ddd","Type":"ContainerDied","Data":"95615665a31bb5d5aa9349c95ea5d346f96814ae0a8094ee01a806dd16483fb6"} Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.503704 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-l4qmd"] Jan 21 10:52:20 crc kubenswrapper[4745]: E0121 10:52:20.503945 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" containerName="extract-utilities" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.503957 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" containerName="extract-utilities" Jan 21 10:52:20 crc kubenswrapper[4745]: E0121 10:52:20.503981 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" containerName="extract-content" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.503987 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" containerName="extract-content" Jan 21 10:52:20 crc kubenswrapper[4745]: E0121 10:52:20.503999 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" containerName="registry-server" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.504005 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" containerName="registry-server" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.504100 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf4ac89-473c-4d31-902f-a5cfd122edf0" 
containerName="registry-server" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.504472 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-l4qmd" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.508582 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-z9hc6" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.509709 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.509855 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.526319 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l4qmd"] Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.564683 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcgdb\" (UniqueName: \"kubernetes.io/projected/fa66bbac-12d5-40aa-b852-00ddac9637a1-kube-api-access-wcgdb\") pod \"openstack-operator-index-l4qmd\" (UID: \"fa66bbac-12d5-40aa-b852-00ddac9637a1\") " pod="openstack-operators/openstack-operator-index-l4qmd" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.666214 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcgdb\" (UniqueName: \"kubernetes.io/projected/fa66bbac-12d5-40aa-b852-00ddac9637a1-kube-api-access-wcgdb\") pod \"openstack-operator-index-l4qmd\" (UID: \"fa66bbac-12d5-40aa-b852-00ddac9637a1\") " pod="openstack-operators/openstack-operator-index-l4qmd" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.697244 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcgdb\" (UniqueName: 
\"kubernetes.io/projected/fa66bbac-12d5-40aa-b852-00ddac9637a1-kube-api-access-wcgdb\") pod \"openstack-operator-index-l4qmd\" (UID: \"fa66bbac-12d5-40aa-b852-00ddac9637a1\") " pod="openstack-operators/openstack-operator-index-l4qmd" Jan 21 10:52:20 crc kubenswrapper[4745]: I0121 10:52:20.817752 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-l4qmd" Jan 21 10:52:21 crc kubenswrapper[4745]: I0121 10:52:21.132479 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l4qmd"] Jan 21 10:52:21 crc kubenswrapper[4745]: W0121 10:52:21.140078 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa66bbac_12d5_40aa_b852_00ddac9637a1.slice/crio-8f8f6aab306d2918254ff385c4d61791cf9f4a3bd58c6f6cc0d0f5adcb2bb9ce WatchSource:0}: Error finding container 8f8f6aab306d2918254ff385c4d61791cf9f4a3bd58c6f6cc0d0f5adcb2bb9ce: Status 404 returned error can't find the container with id 8f8f6aab306d2918254ff385c4d61791cf9f4a3bd58c6f6cc0d0f5adcb2bb9ce Jan 21 10:52:21 crc kubenswrapper[4745]: I0121 10:52:21.419573 4745 generic.go:334] "Generic (PLEG): container finished" podID="db2f79cd-c6c7-459f-bf98-002583ba5ddd" containerID="835bf92bfa865012aa4f01823e7fd2161034836735eb211ece4722cc217c1dcf" exitCode=0 Jan 21 10:52:21 crc kubenswrapper[4745]: I0121 10:52:21.419659 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9f9vp" event={"ID":"db2f79cd-c6c7-459f-bf98-002583ba5ddd","Type":"ContainerDied","Data":"835bf92bfa865012aa4f01823e7fd2161034836735eb211ece4722cc217c1dcf"} Jan 21 10:52:21 crc kubenswrapper[4745]: I0121 10:52:21.420699 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l4qmd" 
event={"ID":"fa66bbac-12d5-40aa-b852-00ddac9637a1","Type":"ContainerStarted","Data":"8f8f6aab306d2918254ff385c4d61791cf9f4a3bd58c6f6cc0d0f5adcb2bb9ce"} Jan 21 10:52:22 crc kubenswrapper[4745]: I0121 10:52:22.446916 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9f9vp" event={"ID":"db2f79cd-c6c7-459f-bf98-002583ba5ddd","Type":"ContainerStarted","Data":"d5d678eba980f7fac84a09c1363db854a0082852af0f1ed66eceed83757b7110"} Jan 21 10:52:22 crc kubenswrapper[4745]: I0121 10:52:22.447628 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9f9vp" event={"ID":"db2f79cd-c6c7-459f-bf98-002583ba5ddd","Type":"ContainerStarted","Data":"68db6eb95e117c87e9c03c77163cbf4f7bdfb52394d64f648bad042fcb52da21"} Jan 21 10:52:22 crc kubenswrapper[4745]: I0121 10:52:22.447646 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9f9vp" event={"ID":"db2f79cd-c6c7-459f-bf98-002583ba5ddd","Type":"ContainerStarted","Data":"0363b8cba4daaeb80c1d9f899f63f5107385a7583ad62b9ba033e4f8b8113153"} Jan 21 10:52:22 crc kubenswrapper[4745]: I0121 10:52:22.447656 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9f9vp" event={"ID":"db2f79cd-c6c7-459f-bf98-002583ba5ddd","Type":"ContainerStarted","Data":"7808942f2dcc761fba2a452b198fb137a5d940b68ebd758aec9a6557384ff784"} Jan 21 10:52:22 crc kubenswrapper[4745]: I0121 10:52:22.447668 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9f9vp" event={"ID":"db2f79cd-c6c7-459f-bf98-002583ba5ddd","Type":"ContainerStarted","Data":"32d868bfb4b22c94f6eb1c8029303df5a5d635c61a306b2b9ac87f6c0f9978bc"} Jan 21 10:52:23 crc kubenswrapper[4745]: I0121 10:52:23.461773 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9f9vp" event={"ID":"db2f79cd-c6c7-459f-bf98-002583ba5ddd","Type":"ContainerStarted","Data":"7038b3556d32d1aa339886154aa607744e9efb000e2059e389cb75ba09d1fba9"} Jan 21 10:52:23 
crc kubenswrapper[4745]: I0121 10:52:23.462198 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:23 crc kubenswrapper[4745]: I0121 10:52:23.496759 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9f9vp" podStartSLOduration=6.624037684 podStartE2EDuration="18.496741925s" podCreationTimestamp="2026-01-21 10:52:05 +0000 UTC" firstStartedPulling="2026-01-21 10:52:06.17739342 +0000 UTC m=+910.638181018" lastFinishedPulling="2026-01-21 10:52:18.050097661 +0000 UTC m=+922.510885259" observedRunningTime="2026-01-21 10:52:23.494286122 +0000 UTC m=+927.955073730" watchObservedRunningTime="2026-01-21 10:52:23.496741925 +0000 UTC m=+927.957529523" Jan 21 10:52:24 crc kubenswrapper[4745]: I0121 10:52:24.469187 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l4qmd" event={"ID":"fa66bbac-12d5-40aa-b852-00ddac9637a1","Type":"ContainerStarted","Data":"b91b5ffaddf125d21af382e1dea61d4caefe92404dc812adc0341ccbdf35ffc8"} Jan 21 10:52:24 crc kubenswrapper[4745]: I0121 10:52:24.502871 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-l4qmd" podStartSLOduration=2.119266834 podStartE2EDuration="4.502855297s" podCreationTimestamp="2026-01-21 10:52:20 +0000 UTC" firstStartedPulling="2026-01-21 10:52:21.142999603 +0000 UTC m=+925.603787201" lastFinishedPulling="2026-01-21 10:52:23.526588066 +0000 UTC m=+927.987375664" observedRunningTime="2026-01-21 10:52:24.498584635 +0000 UTC m=+928.959372233" watchObservedRunningTime="2026-01-21 10:52:24.502855297 +0000 UTC m=+928.963642885" Jan 21 10:52:25 crc kubenswrapper[4745]: I0121 10:52:25.889111 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:25 crc kubenswrapper[4745]: I0121 10:52:25.930357 4745 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:30 crc kubenswrapper[4745]: I0121 10:52:30.817981 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-l4qmd" Jan 21 10:52:30 crc kubenswrapper[4745]: I0121 10:52:30.818644 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-l4qmd" Jan 21 10:52:30 crc kubenswrapper[4745]: I0121 10:52:30.851087 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-l4qmd" Jan 21 10:52:31 crc kubenswrapper[4745]: I0121 10:52:31.550604 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-l4qmd" Jan 21 10:52:32 crc kubenswrapper[4745]: I0121 10:52:32.758080 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2"] Jan 21 10:52:32 crc kubenswrapper[4745]: I0121 10:52:32.759827 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:32 crc kubenswrapper[4745]: I0121 10:52:32.764897 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-4j2tn" Jan 21 10:52:32 crc kubenswrapper[4745]: I0121 10:52:32.783251 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2"] Jan 21 10:52:32 crc kubenswrapper[4745]: I0121 10:52:32.941399 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znj6j\" (UniqueName: \"kubernetes.io/projected/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-kube-api-access-znj6j\") pod \"78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:32 crc kubenswrapper[4745]: I0121 10:52:32.941776 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-bundle\") pod \"78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:32 crc kubenswrapper[4745]: I0121 10:52:32.941926 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-util\") pod \"78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:33 crc kubenswrapper[4745]: I0121 
10:52:33.042957 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-bundle\") pod \"78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:33 crc kubenswrapper[4745]: I0121 10:52:33.043329 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-util\") pod \"78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:33 crc kubenswrapper[4745]: I0121 10:52:33.043469 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znj6j\" (UniqueName: \"kubernetes.io/projected/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-kube-api-access-znj6j\") pod \"78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:33 crc kubenswrapper[4745]: I0121 10:52:33.043701 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-util\") pod \"78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:33 crc kubenswrapper[4745]: I0121 10:52:33.043474 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-bundle\") pod \"78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:33 crc kubenswrapper[4745]: I0121 10:52:33.073549 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znj6j\" (UniqueName: \"kubernetes.io/projected/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-kube-api-access-znj6j\") pod \"78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:33 crc kubenswrapper[4745]: I0121 10:52:33.073947 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:33 crc kubenswrapper[4745]: I0121 10:52:33.519102 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2"] Jan 21 10:52:33 crc kubenswrapper[4745]: W0121 10:52:33.526853 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode386ddd7_8bcd_4130_b5f8_1ec63b3c515a.slice/crio-ceef9f70298679dbb45790ed357447a8030a9a3f5443cf7b6cacbcfce7c4698e WatchSource:0}: Error finding container ceef9f70298679dbb45790ed357447a8030a9a3f5443cf7b6cacbcfce7c4698e: Status 404 returned error can't find the container with id ceef9f70298679dbb45790ed357447a8030a9a3f5443cf7b6cacbcfce7c4698e Jan 21 10:52:34 crc kubenswrapper[4745]: I0121 10:52:34.561247 4745 generic.go:334] "Generic (PLEG): container finished" podID="e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" containerID="56e2aa7f73b97e11ab1924e920ae3f537270a8a85efa74b6616080db3d2be6d9" exitCode=0 Jan 21 
10:52:34 crc kubenswrapper[4745]: I0121 10:52:34.561499 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" event={"ID":"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a","Type":"ContainerDied","Data":"56e2aa7f73b97e11ab1924e920ae3f537270a8a85efa74b6616080db3d2be6d9"} Jan 21 10:52:34 crc kubenswrapper[4745]: I0121 10:52:34.561575 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" event={"ID":"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a","Type":"ContainerStarted","Data":"ceef9f70298679dbb45790ed357447a8030a9a3f5443cf7b6cacbcfce7c4698e"} Jan 21 10:52:35 crc kubenswrapper[4745]: I0121 10:52:35.571707 4745 generic.go:334] "Generic (PLEG): container finished" podID="e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" containerID="328f16c30c5ebc7bb145a4fd9c69cce84020efd566733bf8c496bf59038200a6" exitCode=0 Jan 21 10:52:35 crc kubenswrapper[4745]: I0121 10:52:35.571758 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" event={"ID":"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a","Type":"ContainerDied","Data":"328f16c30c5ebc7bb145a4fd9c69cce84020efd566733bf8c496bf59038200a6"} Jan 21 10:52:35 crc kubenswrapper[4745]: I0121 10:52:35.899831 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9f9vp" Jan 21 10:52:36 crc kubenswrapper[4745]: I0121 10:52:36.523714 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-dq466" Jan 21 10:52:36 crc kubenswrapper[4745]: I0121 10:52:36.583687 4745 generic.go:334] "Generic (PLEG): container finished" podID="e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" containerID="e161ae7de1ddc058035f5289d6388c9fd6a237bd78595ecdc37ccedbf6d316b7" exitCode=0 Jan 21 10:52:36 crc kubenswrapper[4745]: I0121 
10:52:36.583748 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" event={"ID":"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a","Type":"ContainerDied","Data":"e161ae7de1ddc058035f5289d6388c9fd6a237bd78595ecdc37ccedbf6d316b7"} Jan 21 10:52:37 crc kubenswrapper[4745]: I0121 10:52:37.893835 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.013800 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-bundle\") pod \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.014858 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-bundle" (OuterVolumeSpecName: "bundle") pod "e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" (UID: "e386ddd7-8bcd-4130-b5f8-1ec63b3c515a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.015048 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-util\") pod \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.015714 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znj6j\" (UniqueName: \"kubernetes.io/projected/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-kube-api-access-znj6j\") pod \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\" (UID: \"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a\") " Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.016370 4745 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.031507 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-util" (OuterVolumeSpecName: "util") pod "e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" (UID: "e386ddd7-8bcd-4130-b5f8-1ec63b3c515a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.036610 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-kube-api-access-znj6j" (OuterVolumeSpecName: "kube-api-access-znj6j") pod "e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" (UID: "e386ddd7-8bcd-4130-b5f8-1ec63b3c515a"). InnerVolumeSpecName "kube-api-access-znj6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.118171 4745 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-util\") on node \"crc\" DevicePath \"\"" Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.118235 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znj6j\" (UniqueName: \"kubernetes.io/projected/e386ddd7-8bcd-4130-b5f8-1ec63b3c515a-kube-api-access-znj6j\") on node \"crc\" DevicePath \"\"" Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.615692 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" event={"ID":"e386ddd7-8bcd-4130-b5f8-1ec63b3c515a","Type":"ContainerDied","Data":"ceef9f70298679dbb45790ed357447a8030a9a3f5443cf7b6cacbcfce7c4698e"} Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.616321 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceef9f70298679dbb45790ed357447a8030a9a3f5443cf7b6cacbcfce7c4698e" Jan 21 10:52:38 crc kubenswrapper[4745]: I0121 10:52:38.616100 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2" Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.429195 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v"] Jan 21 10:52:45 crc kubenswrapper[4745]: E0121 10:52:45.429570 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" containerName="util" Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.429587 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" containerName="util" Jan 21 10:52:45 crc kubenswrapper[4745]: E0121 10:52:45.429600 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" containerName="extract" Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.429610 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" containerName="extract" Jan 21 10:52:45 crc kubenswrapper[4745]: E0121 10:52:45.429626 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" containerName="pull" Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.429635 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" containerName="pull" Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.429818 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e386ddd7-8bcd-4130-b5f8-1ec63b3c515a" containerName="extract" Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.430387 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.432675 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-xm5lz" Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.475166 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v"] Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.523961 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlv45\" (UniqueName: \"kubernetes.io/projected/8381ff45-ae46-437a-894e-1530d39397f8-kube-api-access-zlv45\") pod \"openstack-operator-controller-init-777994b6d8-xpq4v\" (UID: \"8381ff45-ae46-437a-894e-1530d39397f8\") " pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.625431 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlv45\" (UniqueName: \"kubernetes.io/projected/8381ff45-ae46-437a-894e-1530d39397f8-kube-api-access-zlv45\") pod \"openstack-operator-controller-init-777994b6d8-xpq4v\" (UID: \"8381ff45-ae46-437a-894e-1530d39397f8\") " pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.647987 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlv45\" (UniqueName: \"kubernetes.io/projected/8381ff45-ae46-437a-894e-1530d39397f8-kube-api-access-zlv45\") pod \"openstack-operator-controller-init-777994b6d8-xpq4v\" (UID: \"8381ff45-ae46-437a-894e-1530d39397f8\") " pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" Jan 21 10:52:45 crc kubenswrapper[4745]: I0121 10:52:45.785404 4745 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" Jan 21 10:52:46 crc kubenswrapper[4745]: I0121 10:52:46.031993 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v"] Jan 21 10:52:46 crc kubenswrapper[4745]: I0121 10:52:46.674308 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" event={"ID":"8381ff45-ae46-437a-894e-1530d39397f8","Type":"ContainerStarted","Data":"0c17ab2419e01b5b5e87734129b556a6297e7abd74d36a87815524fd3beadda1"} Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.618473 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-n8v4b"] Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.622052 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.697216 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx8ks\" (UniqueName: \"kubernetes.io/projected/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-kube-api-access-mx8ks\") pod \"certified-operators-n8v4b\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.697275 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-catalog-content\") pod \"certified-operators-n8v4b\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.697304 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-utilities\") pod \"certified-operators-n8v4b\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.704787 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8v4b"] Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.798276 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx8ks\" (UniqueName: \"kubernetes.io/projected/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-kube-api-access-mx8ks\") pod \"certified-operators-n8v4b\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.798339 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-catalog-content\") pod \"certified-operators-n8v4b\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.798358 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-utilities\") pod \"certified-operators-n8v4b\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.798829 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-utilities\") pod \"certified-operators-n8v4b\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " 
pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.798965 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-catalog-content\") pod \"certified-operators-n8v4b\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.816923 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx8ks\" (UniqueName: \"kubernetes.io/projected/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-kube-api-access-mx8ks\") pod \"certified-operators-n8v4b\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:52:53 crc kubenswrapper[4745]: I0121 10:52:53.972028 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:52:55 crc kubenswrapper[4745]: I0121 10:52:55.883450 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-n8v4b"] Jan 21 10:52:55 crc kubenswrapper[4745]: W0121 10:52:55.887669 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3d14220_b0eb_46a7_8ae4_6204b1d3b29d.slice/crio-a01d229726b16edeac4598ab4a9ac15abede29ea1d0d89f7d4077020b87a27b4 WatchSource:0}: Error finding container a01d229726b16edeac4598ab4a9ac15abede29ea1d0d89f7d4077020b87a27b4: Status 404 returned error can't find the container with id a01d229726b16edeac4598ab4a9ac15abede29ea1d0d89f7d4077020b87a27b4 Jan 21 10:52:56 crc kubenswrapper[4745]: I0121 10:52:56.762926 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" 
event={"ID":"8381ff45-ae46-437a-894e-1530d39397f8","Type":"ContainerStarted","Data":"cd1d89d364601273fd72b3c29f9437239504fefaeaf493423e0468bab3f5325a"} Jan 21 10:52:56 crc kubenswrapper[4745]: I0121 10:52:56.764299 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" Jan 21 10:52:56 crc kubenswrapper[4745]: I0121 10:52:56.765860 4745 generic.go:334] "Generic (PLEG): container finished" podID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" containerID="a62fe821eb55b92e0d0237a4bc71ab8858d1645dfd78995af79b58f0f5423c3e" exitCode=0 Jan 21 10:52:56 crc kubenswrapper[4745]: I0121 10:52:56.765893 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8v4b" event={"ID":"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d","Type":"ContainerDied","Data":"a62fe821eb55b92e0d0237a4bc71ab8858d1645dfd78995af79b58f0f5423c3e"} Jan 21 10:52:56 crc kubenswrapper[4745]: I0121 10:52:56.765913 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8v4b" event={"ID":"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d","Type":"ContainerStarted","Data":"a01d229726b16edeac4598ab4a9ac15abede29ea1d0d89f7d4077020b87a27b4"} Jan 21 10:52:56 crc kubenswrapper[4745]: I0121 10:52:56.797386 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" podStartSLOduration=2.120539311 podStartE2EDuration="11.7973692s" podCreationTimestamp="2026-01-21 10:52:45 +0000 UTC" firstStartedPulling="2026-01-21 10:52:46.046437172 +0000 UTC m=+950.507224770" lastFinishedPulling="2026-01-21 10:52:55.723267061 +0000 UTC m=+960.184054659" observedRunningTime="2026-01-21 10:52:56.79317826 +0000 UTC m=+961.253965868" watchObservedRunningTime="2026-01-21 10:52:56.7973692 +0000 UTC m=+961.258156798" Jan 21 10:52:57 crc kubenswrapper[4745]: I0121 10:52:57.774443 4745 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8v4b" event={"ID":"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d","Type":"ContainerStarted","Data":"aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7"} Jan 21 10:52:58 crc kubenswrapper[4745]: I0121 10:52:58.782141 4745 generic.go:334] "Generic (PLEG): container finished" podID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" containerID="aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7" exitCode=0 Jan 21 10:52:58 crc kubenswrapper[4745]: I0121 10:52:58.782296 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8v4b" event={"ID":"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d","Type":"ContainerDied","Data":"aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7"} Jan 21 10:52:59 crc kubenswrapper[4745]: I0121 10:52:59.800991 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8v4b" event={"ID":"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d","Type":"ContainerStarted","Data":"67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d"} Jan 21 10:52:59 crc kubenswrapper[4745]: I0121 10:52:59.819945 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-n8v4b" podStartSLOduration=4.39158634 podStartE2EDuration="6.819927633s" podCreationTimestamp="2026-01-21 10:52:53 +0000 UTC" firstStartedPulling="2026-01-21 10:52:56.767037896 +0000 UTC m=+961.227825515" lastFinishedPulling="2026-01-21 10:52:59.19537921 +0000 UTC m=+963.656166808" observedRunningTime="2026-01-21 10:52:59.81949337 +0000 UTC m=+964.280280988" watchObservedRunningTime="2026-01-21 10:52:59.819927633 +0000 UTC m=+964.280715241" Jan 21 10:53:03 crc kubenswrapper[4745]: I0121 10:53:03.973049 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:53:03 crc kubenswrapper[4745]: 
I0121 10:53:03.974009 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:53:04 crc kubenswrapper[4745]: I0121 10:53:04.023732 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:53:04 crc kubenswrapper[4745]: I0121 10:53:04.874406 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:53:04 crc kubenswrapper[4745]: I0121 10:53:04.926991 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n8v4b"] Jan 21 10:53:05 crc kubenswrapper[4745]: I0121 10:53:05.790185 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" Jan 21 10:53:06 crc kubenswrapper[4745]: I0121 10:53:06.842795 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-n8v4b" podUID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" containerName="registry-server" containerID="cri-o://67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d" gracePeriod=2 Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.799567 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.852591 4745 generic.go:334] "Generic (PLEG): container finished" podID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" containerID="67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d" exitCode=0 Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.852655 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-n8v4b" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.852656 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8v4b" event={"ID":"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d","Type":"ContainerDied","Data":"67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d"} Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.852724 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-n8v4b" event={"ID":"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d","Type":"ContainerDied","Data":"a01d229726b16edeac4598ab4a9ac15abede29ea1d0d89f7d4077020b87a27b4"} Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.852750 4745 scope.go:117] "RemoveContainer" containerID="67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.873147 4745 scope.go:117] "RemoveContainer" containerID="aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.898085 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-catalog-content\") pod \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.898196 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mx8ks\" (UniqueName: \"kubernetes.io/projected/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-kube-api-access-mx8ks\") pod \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.898260 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-utilities\") pod \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\" (UID: \"c3d14220-b0eb-46a7-8ae4-6204b1d3b29d\") " Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.899390 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-utilities" (OuterVolumeSpecName: "utilities") pod "c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" (UID: "c3d14220-b0eb-46a7-8ae4-6204b1d3b29d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.906736 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-kube-api-access-mx8ks" (OuterVolumeSpecName: "kube-api-access-mx8ks") pod "c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" (UID: "c3d14220-b0eb-46a7-8ae4-6204b1d3b29d"). InnerVolumeSpecName "kube-api-access-mx8ks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.911849 4745 scope.go:117] "RemoveContainer" containerID="a62fe821eb55b92e0d0237a4bc71ab8858d1645dfd78995af79b58f0f5423c3e" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.936633 4745 scope.go:117] "RemoveContainer" containerID="67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d" Jan 21 10:53:07 crc kubenswrapper[4745]: E0121 10:53:07.937085 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d\": container with ID starting with 67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d not found: ID does not exist" containerID="67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.937136 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d"} err="failed to get container status \"67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d\": rpc error: code = NotFound desc = could not find container \"67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d\": container with ID starting with 67902dda048728d88f952f9412bb196628f4756850f948cf84407bf0ba6cfd0d not found: ID does not exist" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.937162 4745 scope.go:117] "RemoveContainer" containerID="aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7" Jan 21 10:53:07 crc kubenswrapper[4745]: E0121 10:53:07.937584 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7\": container with ID starting with 
aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7 not found: ID does not exist" containerID="aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.937643 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7"} err="failed to get container status \"aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7\": rpc error: code = NotFound desc = could not find container \"aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7\": container with ID starting with aed4d787e69ee94976941758b0d1fab0581719f32206aac451c1d6ca0a3483e7 not found: ID does not exist" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.937676 4745 scope.go:117] "RemoveContainer" containerID="a62fe821eb55b92e0d0237a4bc71ab8858d1645dfd78995af79b58f0f5423c3e" Jan 21 10:53:07 crc kubenswrapper[4745]: E0121 10:53:07.938189 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62fe821eb55b92e0d0237a4bc71ab8858d1645dfd78995af79b58f0f5423c3e\": container with ID starting with a62fe821eb55b92e0d0237a4bc71ab8858d1645dfd78995af79b58f0f5423c3e not found: ID does not exist" containerID="a62fe821eb55b92e0d0237a4bc71ab8858d1645dfd78995af79b58f0f5423c3e" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.938217 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62fe821eb55b92e0d0237a4bc71ab8858d1645dfd78995af79b58f0f5423c3e"} err="failed to get container status \"a62fe821eb55b92e0d0237a4bc71ab8858d1645dfd78995af79b58f0f5423c3e\": rpc error: code = NotFound desc = could not find container \"a62fe821eb55b92e0d0237a4bc71ab8858d1645dfd78995af79b58f0f5423c3e\": container with ID starting with a62fe821eb55b92e0d0237a4bc71ab8858d1645dfd78995af79b58f0f5423c3e not found: ID does not 
exist" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.951353 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" (UID: "c3d14220-b0eb-46a7-8ae4-6204b1d3b29d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.999447 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.999483 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mx8ks\" (UniqueName: \"kubernetes.io/projected/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-kube-api-access-mx8ks\") on node \"crc\" DevicePath \"\"" Jan 21 10:53:07 crc kubenswrapper[4745]: I0121 10:53:07.999493 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:53:08 crc kubenswrapper[4745]: I0121 10:53:08.170995 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-n8v4b"] Jan 21 10:53:08 crc kubenswrapper[4745]: I0121 10:53:08.175371 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-n8v4b"] Jan 21 10:53:10 crc kubenswrapper[4745]: I0121 10:53:10.007405 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" path="/var/lib/kubelet/pods/c3d14220-b0eb-46a7-8ae4-6204b1d3b29d/volumes" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.710682 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj"] Jan 21 10:53:24 crc kubenswrapper[4745]: E0121 10:53:24.711595 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" containerName="extract-utilities" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.711610 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" containerName="extract-utilities" Jan 21 10:53:24 crc kubenswrapper[4745]: E0121 10:53:24.711625 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" containerName="registry-server" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.711631 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" containerName="registry-server" Jan 21 10:53:24 crc kubenswrapper[4745]: E0121 10:53:24.711641 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" containerName="extract-content" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.711647 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" containerName="extract-content" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.711753 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3d14220-b0eb-46a7-8ae4-6204b1d3b29d" containerName="registry-server" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.712181 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.717101 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-ljmbt" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.727656 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk"] Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.728819 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.731061 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj"] Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.732481 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-64hn6" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.826566 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpzpx\" (UniqueName: \"kubernetes.io/projected/d9337025-a702-4dd2-b8a4-e807525a34f5-kube-api-access-vpzpx\") pod \"cinder-operator-controller-manager-9b68f5989-qcrlk\" (UID: \"d9337025-a702-4dd2-b8a4-e807525a34f5\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.826634 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xmmh\" (UniqueName: \"kubernetes.io/projected/f99a5f65-e2aa-4476-b4c6-6566761f1ad2-kube-api-access-5xmmh\") pod \"barbican-operator-controller-manager-7ddb5c749-bqhjj\" (UID: \"f99a5f65-e2aa-4476-b4c6-6566761f1ad2\") " 
pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.838064 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk"] Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.881421 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg"] Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.882163 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.888336 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-j9fcz" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.893407 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gntws"] Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.895232 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.902514 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-vd5fl" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.906692 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg"] Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.915946 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gntws"] Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.928025 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpzpx\" (UniqueName: \"kubernetes.io/projected/d9337025-a702-4dd2-b8a4-e807525a34f5-kube-api-access-vpzpx\") pod \"cinder-operator-controller-manager-9b68f5989-qcrlk\" (UID: \"d9337025-a702-4dd2-b8a4-e807525a34f5\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.928327 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xmmh\" (UniqueName: \"kubernetes.io/projected/f99a5f65-e2aa-4476-b4c6-6566761f1ad2-kube-api-access-5xmmh\") pod \"barbican-operator-controller-manager-7ddb5c749-bqhjj\" (UID: \"f99a5f65-e2aa-4476-b4c6-6566761f1ad2\") " pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.955221 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xmmh\" (UniqueName: \"kubernetes.io/projected/f99a5f65-e2aa-4476-b4c6-6566761f1ad2-kube-api-access-5xmmh\") pod \"barbican-operator-controller-manager-7ddb5c749-bqhjj\" (UID: \"f99a5f65-e2aa-4476-b4c6-6566761f1ad2\") " 
pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.969626 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj"] Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.970803 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpzpx\" (UniqueName: \"kubernetes.io/projected/d9337025-a702-4dd2-b8a4-e807525a34f5-kube-api-access-vpzpx\") pod \"cinder-operator-controller-manager-9b68f5989-qcrlk\" (UID: \"d9337025-a702-4dd2-b8a4-e807525a34f5\") " pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.981162 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft"] Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.981936 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.982892 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.986881 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-htn7x" Jan 21 10:53:24 crc kubenswrapper[4745]: I0121 10:53:24.992514 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-x75ls" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.035799 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.035793 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn2mg\" (UniqueName: \"kubernetes.io/projected/bc9be084-edd6-4556-88af-354f416d451c-kube-api-access-mn2mg\") pod \"designate-operator-controller-manager-9f958b845-hw9zg\" (UID: \"bc9be084-edd6-4556-88af-354f416d451c\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.053550 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.077467 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkg4\" (UniqueName: \"kubernetes.io/projected/9ff19137-02fd-4de1-9601-95a5c0fbbed0-kube-api-access-8jkg4\") pod \"glance-operator-controller-manager-c6994669c-gntws\" (UID: \"9ff19137-02fd-4de1-9601-95a5c0fbbed0\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.078106 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.096645 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.097450 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.101948 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.109631 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.117846 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-zkbs7" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.153748 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.166559 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.167811 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.177965 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.179230 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.179273 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-n5g5p" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.179597 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mn2mg\" (UniqueName: \"kubernetes.io/projected/bc9be084-edd6-4556-88af-354f416d451c-kube-api-access-mn2mg\") pod \"designate-operator-controller-manager-9f958b845-hw9zg\" (UID: \"bc9be084-edd6-4556-88af-354f416d451c\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.179735 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk8fl\" (UniqueName: \"kubernetes.io/projected/784904b1-a1d9-4319-be67-34e3dfdc1c9a-kube-api-access-bk8fl\") pod \"horizon-operator-controller-manager-77d5c5b54f-sqhft\" (UID: \"784904b1-a1d9-4319-be67-34e3dfdc1c9a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.179855 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkg4\" (UniqueName: \"kubernetes.io/projected/9ff19137-02fd-4de1-9601-95a5c0fbbed0-kube-api-access-8jkg4\") pod \"glance-operator-controller-manager-c6994669c-gntws\" 
(UID: \"9ff19137-02fd-4de1-9601-95a5c0fbbed0\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.179959 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wkmf\" (UniqueName: \"kubernetes.io/projected/2528950f-ec80-4609-a77c-d6fbb2768e3b-kube-api-access-2wkmf\") pod \"infra-operator-controller-manager-77c48c7859-4nt9f\" (UID: \"2528950f-ec80-4609-a77c-d6fbb2768e3b\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.180057 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn682\" (UniqueName: \"kubernetes.io/projected/fb04ba1c-d6a0-40aa-b985-f4715cb11257-kube-api-access-tn682\") pod \"keystone-operator-controller-manager-767fdc4f47-fh7ts\" (UID: \"fb04ba1c-d6a0-40aa-b985-f4715cb11257\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.180247 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert\") pod \"infra-operator-controller-manager-77c48c7859-4nt9f\" (UID: \"2528950f-ec80-4609-a77c-d6fbb2768e3b\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.181072 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc7pf\" (UniqueName: \"kubernetes.io/projected/b28edf64-70dc-4fc2-8d7f-c1f141cbd31e-kube-api-access-lc7pf\") pod \"heat-operator-controller-manager-594c8c9d5d-g4gpj\" (UID: \"b28edf64-70dc-4fc2-8d7f-c1f141cbd31e\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" 
Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.193571 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6rjqw" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.195869 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.197254 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.209046 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-jcw78" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.228681 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.238194 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkg4\" (UniqueName: \"kubernetes.io/projected/9ff19137-02fd-4de1-9601-95a5c0fbbed0-kube-api-access-8jkg4\") pod \"glance-operator-controller-manager-c6994669c-gntws\" (UID: \"9ff19137-02fd-4de1-9601-95a5c0fbbed0\") " pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.238633 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.251699 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.253769 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.259655 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.272126 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn2mg\" (UniqueName: \"kubernetes.io/projected/bc9be084-edd6-4556-88af-354f416d451c-kube-api-access-mn2mg\") pod \"designate-operator-controller-manager-9f958b845-hw9zg\" (UID: \"bc9be084-edd6-4556-88af-354f416d451c\") " pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.272410 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-84ptk" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.272928 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.273737 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.281128 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-wx75z" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.281430 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.283255 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert\") pod \"infra-operator-controller-manager-77c48c7859-4nt9f\" (UID: \"2528950f-ec80-4609-a77c-d6fbb2768e3b\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.283454 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc7pf\" (UniqueName: \"kubernetes.io/projected/b28edf64-70dc-4fc2-8d7f-c1f141cbd31e-kube-api-access-lc7pf\") pod \"heat-operator-controller-manager-594c8c9d5d-g4gpj\" (UID: \"b28edf64-70dc-4fc2-8d7f-c1f141cbd31e\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.283586 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8fl\" (UniqueName: \"kubernetes.io/projected/784904b1-a1d9-4319-be67-34e3dfdc1c9a-kube-api-access-bk8fl\") pod \"horizon-operator-controller-manager-77d5c5b54f-sqhft\" (UID: \"784904b1-a1d9-4319-be67-34e3dfdc1c9a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.283703 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-czwn8\" (UniqueName: \"kubernetes.io/projected/c0985a55-6ede-4214-87fe-27cb5668dd86-kube-api-access-czwn8\") pod \"mariadb-operator-controller-manager-c87fff755-8xm9d\" (UID: \"c0985a55-6ede-4214-87fe-27cb5668dd86\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.283795 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wkmf\" (UniqueName: \"kubernetes.io/projected/2528950f-ec80-4609-a77c-d6fbb2768e3b-kube-api-access-2wkmf\") pod \"infra-operator-controller-manager-77c48c7859-4nt9f\" (UID: \"2528950f-ec80-4609-a77c-d6fbb2768e3b\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.283886 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn68s\" (UniqueName: \"kubernetes.io/projected/42c37f0d-415a-4a72-ae98-07551477c6cc-kube-api-access-mn68s\") pod \"neutron-operator-controller-manager-cb4666565-x9mpf\" (UID: \"42c37f0d-415a-4a72-ae98-07551477c6cc\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.283967 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn682\" (UniqueName: \"kubernetes.io/projected/fb04ba1c-d6a0-40aa-b985-f4715cb11257-kube-api-access-tn682\") pod \"keystone-operator-controller-manager-767fdc4f47-fh7ts\" (UID: \"fb04ba1c-d6a0-40aa-b985-f4715cb11257\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.284066 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22lzp\" (UniqueName: 
\"kubernetes.io/projected/2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462-kube-api-access-22lzp\") pod \"ironic-operator-controller-manager-78757b4889-clbcs\" (UID: \"2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.284495 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5l9n7\" (UniqueName: \"kubernetes.io/projected/dfb1f262-fe24-45bf-8f75-0e2a81989f3f-kube-api-access-5l9n7\") pod \"manila-operator-controller-manager-864f6b75bf-dvhql\" (UID: \"dfb1f262-fe24-45bf-8f75-0e2a81989f3f\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" Jan 21 10:53:25 crc kubenswrapper[4745]: E0121 10:53:25.284302 4745 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 10:53:25 crc kubenswrapper[4745]: E0121 10:53:25.284854 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert podName:2528950f-ec80-4609-a77c-d6fbb2768e3b nodeName:}" failed. No retries permitted until 2026-01-21 10:53:25.784834102 +0000 UTC m=+990.245621700 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert") pod "infra-operator-controller-manager-77c48c7859-4nt9f" (UID: "2528950f-ec80-4609-a77c-d6fbb2768e3b") : secret "infra-operator-webhook-server-cert" not found Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.309510 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wkmf\" (UniqueName: \"kubernetes.io/projected/2528950f-ec80-4609-a77c-d6fbb2768e3b-kube-api-access-2wkmf\") pod \"infra-operator-controller-manager-77c48c7859-4nt9f\" (UID: \"2528950f-ec80-4609-a77c-d6fbb2768e3b\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.316619 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc7pf\" (UniqueName: \"kubernetes.io/projected/b28edf64-70dc-4fc2-8d7f-c1f141cbd31e-kube-api-access-lc7pf\") pod \"heat-operator-controller-manager-594c8c9d5d-g4gpj\" (UID: \"b28edf64-70dc-4fc2-8d7f-c1f141cbd31e\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.333843 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.335018 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.341660 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-zw6tj" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.343190 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk8fl\" (UniqueName: \"kubernetes.io/projected/784904b1-a1d9-4319-be67-34e3dfdc1c9a-kube-api-access-bk8fl\") pod \"horizon-operator-controller-manager-77d5c5b54f-sqhft\" (UID: \"784904b1-a1d9-4319-be67-34e3dfdc1c9a\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.349460 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.350326 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn682\" (UniqueName: \"kubernetes.io/projected/fb04ba1c-d6a0-40aa-b985-f4715cb11257-kube-api-access-tn682\") pod \"keystone-operator-controller-manager-767fdc4f47-fh7ts\" (UID: \"fb04ba1c-d6a0-40aa-b985-f4715cb11257\") " pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.394724 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czwn8\" (UniqueName: \"kubernetes.io/projected/c0985a55-6ede-4214-87fe-27cb5668dd86-kube-api-access-czwn8\") pod \"mariadb-operator-controller-manager-c87fff755-8xm9d\" (UID: \"c0985a55-6ede-4214-87fe-27cb5668dd86\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.394989 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mn68s\" (UniqueName: \"kubernetes.io/projected/42c37f0d-415a-4a72-ae98-07551477c6cc-kube-api-access-mn68s\") pod \"neutron-operator-controller-manager-cb4666565-x9mpf\" (UID: \"42c37f0d-415a-4a72-ae98-07551477c6cc\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.395091 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22lzp\" (UniqueName: \"kubernetes.io/projected/2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462-kube-api-access-22lzp\") pod \"ironic-operator-controller-manager-78757b4889-clbcs\" (UID: \"2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.395206 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5l9n7\" (UniqueName: \"kubernetes.io/projected/dfb1f262-fe24-45bf-8f75-0e2a81989f3f-kube-api-access-5l9n7\") pod \"manila-operator-controller-manager-864f6b75bf-dvhql\" (UID: \"dfb1f262-fe24-45bf-8f75-0e2a81989f3f\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.395307 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48xrz\" (UniqueName: \"kubernetes.io/projected/a96f3189-7bbc-404d-ad6d-05b8fefb65fc-kube-api-access-48xrz\") pod \"octavia-operator-controller-manager-7fc9b76cf6-bx656\" (UID: \"a96f3189-7bbc-404d-ad6d-05b8fefb65fc\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.396120 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m"] Jan 21 10:53:25 crc kubenswrapper[4745]: 
I0121 10:53:25.396975 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.397236 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.409681 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2h7cn" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.422447 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.439219 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn68s\" (UniqueName: \"kubernetes.io/projected/42c37f0d-415a-4a72-ae98-07551477c6cc-kube-api-access-mn68s\") pod \"neutron-operator-controller-manager-cb4666565-x9mpf\" (UID: \"42c37f0d-415a-4a72-ae98-07551477c6cc\") " pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.440495 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5l9n7\" (UniqueName: \"kubernetes.io/projected/dfb1f262-fe24-45bf-8f75-0e2a81989f3f-kube-api-access-5l9n7\") pod \"manila-operator-controller-manager-864f6b75bf-dvhql\" (UID: \"dfb1f262-fe24-45bf-8f75-0e2a81989f3f\") " pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.449163 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.482293 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.483216 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czwn8\" (UniqueName: \"kubernetes.io/projected/c0985a55-6ede-4214-87fe-27cb5668dd86-kube-api-access-czwn8\") pod \"mariadb-operator-controller-manager-c87fff755-8xm9d\" (UID: \"c0985a55-6ede-4214-87fe-27cb5668dd86\") " pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.493313 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.493872 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22lzp\" (UniqueName: \"kubernetes.io/projected/2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462-kube-api-access-22lzp\") pod \"ironic-operator-controller-manager-78757b4889-clbcs\" (UID: \"2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462\") " pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.496455 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48xrz\" (UniqueName: \"kubernetes.io/projected/a96f3189-7bbc-404d-ad6d-05b8fefb65fc-kube-api-access-48xrz\") pod \"octavia-operator-controller-manager-7fc9b76cf6-bx656\" (UID: \"a96f3189-7bbc-404d-ad6d-05b8fefb65fc\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.496985 4745 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tgdzr" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.500515 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.501349 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.515754 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.533896 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.536676 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48xrz\" (UniqueName: \"kubernetes.io/projected/a96f3189-7bbc-404d-ad6d-05b8fefb65fc-kube-api-access-48xrz\") pod \"octavia-operator-controller-manager-7fc9b76cf6-bx656\" (UID: \"a96f3189-7bbc-404d-ad6d-05b8fefb65fc\") " pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.554624 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.596662 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.597394 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldqln\" (UniqueName: \"kubernetes.io/projected/1f562ebe-222a-441b-9277-0aa69a0c0fb3-kube-api-access-ldqln\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.597436 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9kfg\" (UniqueName: \"kubernetes.io/projected/be658ac1-07b6-482b-8b99-35a75fcf3b50-kube-api-access-k9kfg\") pod \"nova-operator-controller-manager-65849867d6-g8j7m\" (UID: \"be658ac1-07b6-482b-8b99-35a75fcf3b50\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.597483 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.606283 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 
10:53:25.606613 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.616412 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.617451 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.622712 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-lc9f6" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.651841 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.663619 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.680448 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.698564 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldqln\" (UniqueName: \"kubernetes.io/projected/1f562ebe-222a-441b-9277-0aa69a0c0fb3-kube-api-access-ldqln\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.698611 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9kfg\" (UniqueName: \"kubernetes.io/projected/be658ac1-07b6-482b-8b99-35a75fcf3b50-kube-api-access-k9kfg\") pod \"nova-operator-controller-manager-65849867d6-g8j7m\" (UID: \"be658ac1-07b6-482b-8b99-35a75fcf3b50\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.698648 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:25 crc kubenswrapper[4745]: E0121 10:53:25.698817 4745 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:25 crc kubenswrapper[4745]: E0121 10:53:25.698861 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert podName:1f562ebe-222a-441b-9277-0aa69a0c0fb3 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:53:26.198847317 +0000 UTC m=+990.659634905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" (UID: "1f562ebe-222a-441b-9277-0aa69a0c0fb3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.699522 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.700832 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.702634 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-6h6rf" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.723571 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.726689 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.741007 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9kfg\" (UniqueName: \"kubernetes.io/projected/be658ac1-07b6-482b-8b99-35a75fcf3b50-kube-api-access-k9kfg\") pod \"nova-operator-controller-manager-65849867d6-g8j7m\" (UID: \"be658ac1-07b6-482b-8b99-35a75fcf3b50\") " pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.750129 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-2tfjm" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.751197 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.772712 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldqln\" (UniqueName: \"kubernetes.io/projected/1f562ebe-222a-441b-9277-0aa69a0c0fb3-kube-api-access-ldqln\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.798573 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.800391 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert\") pod \"infra-operator-controller-manager-77c48c7859-4nt9f\" (UID: \"2528950f-ec80-4609-a77c-d6fbb2768e3b\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.800441 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxsh4\" (UniqueName: \"kubernetes.io/projected/a292ef63-66c6-4416-8212-7b06a9bb8761-kube-api-access-nxsh4\") pod \"ovn-operator-controller-manager-55db956ddc-j96sf\" (UID: \"a292ef63-66c6-4416-8212-7b06a9bb8761\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" Jan 21 10:53:25 crc kubenswrapper[4745]: E0121 10:53:25.801083 4745 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 10:53:25 crc kubenswrapper[4745]: E0121 10:53:25.815552 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert podName:2528950f-ec80-4609-a77c-d6fbb2768e3b nodeName:}" failed. No retries permitted until 2026-01-21 10:53:26.815501898 +0000 UTC m=+991.276289496 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert") pod "infra-operator-controller-manager-77c48c7859-4nt9f" (UID: "2528950f-ec80-4609-a77c-d6fbb2768e3b") : secret "infra-operator-webhook-server-cert" not found Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.826804 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.846777 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.854623 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.855586 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.863198 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8mgw8" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.864743 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.865510 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.867149 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-728pn" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.882702 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.883633 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.897374 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-dc2m7" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.907617 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.922193 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8m2z\" (UniqueName: \"kubernetes.io/projected/ab348be4-f24d-41f5-947a-7f49dc330aa9-kube-api-access-c8m2z\") pod \"placement-operator-controller-manager-686df47fcb-8v4t6\" (UID: \"ab348be4-f24d-41f5-947a-7f49dc330aa9\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.922252 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd5p8\" (UniqueName: \"kubernetes.io/projected/dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19-kube-api-access-qd5p8\") pod \"telemetry-operator-controller-manager-5f8f495fcf-dh2t4\" (UID: 
\"dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.922297 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t595n\" (UniqueName: \"kubernetes.io/projected/57b58631-9efc-4cdb-bb89-47aa70a6bd98-kube-api-access-t595n\") pod \"swift-operator-controller-manager-85dd56d4cc-46lz5\" (UID: \"57b58631-9efc-4cdb-bb89-47aa70a6bd98\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.922318 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxsh4\" (UniqueName: \"kubernetes.io/projected/a292ef63-66c6-4416-8212-7b06a9bb8761-kube-api-access-nxsh4\") pod \"ovn-operator-controller-manager-55db956ddc-j96sf\" (UID: \"a292ef63-66c6-4416-8212-7b06a9bb8761\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.922419 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgkwm\" (UniqueName: \"kubernetes.io/projected/10226f41-eb60-45bf-a116-c51f3de0ea39-kube-api-access-rgkwm\") pod \"test-operator-controller-manager-7cd8bc9dbb-q4ccb\" (UID: \"10226f41-eb60-45bf-a116-c51f3de0ea39\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.922440 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqxx\" (UniqueName: \"kubernetes.io/projected/94d1ae33-41a7-414c-b0d9-cc843ca9fa47-kube-api-access-jhqxx\") pod \"watcher-operator-controller-manager-64cd966744-bg5mt\" (UID: \"94d1ae33-41a7-414c-b0d9-cc843ca9fa47\") " 
pod="openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt" Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.937725 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.966608 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4"] Jan 21 10:53:25 crc kubenswrapper[4745]: I0121 10:53:25.997189 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxsh4\" (UniqueName: \"kubernetes.io/projected/a292ef63-66c6-4416-8212-7b06a9bb8761-kube-api-access-nxsh4\") pod \"ovn-operator-controller-manager-55db956ddc-j96sf\" (UID: \"a292ef63-66c6-4416-8212-7b06a9bb8761\") " pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.028735 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8m2z\" (UniqueName: \"kubernetes.io/projected/ab348be4-f24d-41f5-947a-7f49dc330aa9-kube-api-access-c8m2z\") pod \"placement-operator-controller-manager-686df47fcb-8v4t6\" (UID: \"ab348be4-f24d-41f5-947a-7f49dc330aa9\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.029238 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd5p8\" (UniqueName: \"kubernetes.io/projected/dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19-kube-api-access-qd5p8\") pod \"telemetry-operator-controller-manager-5f8f495fcf-dh2t4\" (UID: \"dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.029322 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t595n\" (UniqueName: \"kubernetes.io/projected/57b58631-9efc-4cdb-bb89-47aa70a6bd98-kube-api-access-t595n\") pod \"swift-operator-controller-manager-85dd56d4cc-46lz5\" (UID: \"57b58631-9efc-4cdb-bb89-47aa70a6bd98\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.029406 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgkwm\" (UniqueName: \"kubernetes.io/projected/10226f41-eb60-45bf-a116-c51f3de0ea39-kube-api-access-rgkwm\") pod \"test-operator-controller-manager-7cd8bc9dbb-q4ccb\" (UID: \"10226f41-eb60-45bf-a116-c51f3de0ea39\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.029458 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhqxx\" (UniqueName: \"kubernetes.io/projected/94d1ae33-41a7-414c-b0d9-cc843ca9fa47-kube-api-access-jhqxx\") pod \"watcher-operator-controller-manager-64cd966744-bg5mt\" (UID: \"94d1ae33-41a7-414c-b0d9-cc843ca9fa47\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.058782 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll"] Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.059841 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.064501 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.064685 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.064690 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7x4sw" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.086266 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t595n\" (UniqueName: \"kubernetes.io/projected/57b58631-9efc-4cdb-bb89-47aa70a6bd98-kube-api-access-t595n\") pod \"swift-operator-controller-manager-85dd56d4cc-46lz5\" (UID: \"57b58631-9efc-4cdb-bb89-47aa70a6bd98\") " pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.104159 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll"] Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.106899 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgkwm\" (UniqueName: \"kubernetes.io/projected/10226f41-eb60-45bf-a116-c51f3de0ea39-kube-api-access-rgkwm\") pod \"test-operator-controller-manager-7cd8bc9dbb-q4ccb\" (UID: \"10226f41-eb60-45bf-a116-c51f3de0ea39\") " pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.112502 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhqxx\" (UniqueName: 
\"kubernetes.io/projected/94d1ae33-41a7-414c-b0d9-cc843ca9fa47-kube-api-access-jhqxx\") pod \"watcher-operator-controller-manager-64cd966744-bg5mt\" (UID: \"94d1ae33-41a7-414c-b0d9-cc843ca9fa47\") " pod="openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.131236 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.131273 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v7qv\" (UniqueName: \"kubernetes.io/projected/8ed49bb1-d169-4518-b064-3fb35fd1bad0-kube-api-access-2v7qv\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.131308 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.131452 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.159791 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8m2z\" (UniqueName: \"kubernetes.io/projected/ab348be4-f24d-41f5-947a-7f49dc330aa9-kube-api-access-c8m2z\") pod \"placement-operator-controller-manager-686df47fcb-8v4t6\" (UID: \"ab348be4-f24d-41f5-947a-7f49dc330aa9\") " pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.167790 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd5p8\" (UniqueName: \"kubernetes.io/projected/dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19-kube-api-access-qd5p8\") pod \"telemetry-operator-controller-manager-5f8f495fcf-dh2t4\" (UID: \"dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19\") " pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.193743 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.194618 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.205384 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.235345 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.235417 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.235474 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.235502 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v7qv\" (UniqueName: \"kubernetes.io/projected/8ed49bb1-d169-4518-b064-3fb35fd1bad0-kube-api-access-2v7qv\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.236283 4745 secret.go:188] 
Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.236319 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:26.736304927 +0000 UTC m=+991.197092525 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "webhook-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.236359 4745 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.236377 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert podName:1f562ebe-222a-441b-9277-0aa69a0c0fb3 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:27.236371218 +0000 UTC m=+991.697158816 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" (UID: "1f562ebe-222a-441b-9277-0aa69a0c0fb3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.236410 4745 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.236427 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:26.73642125 +0000 UTC m=+991.197208848 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "metrics-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.242594 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8"] Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.243588 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.245069 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.264875 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8"] Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.272278 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-72dqv" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.283473 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v7qv\" (UniqueName: \"kubernetes.io/projected/8ed49bb1-d169-4518-b064-3fb35fd1bad0-kube-api-access-2v7qv\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.338170 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqkmp\" (UniqueName: \"kubernetes.io/projected/1efe6d30-3c28-4945-8615-49cafec58641-kube-api-access-qqkmp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-s8zz8\" (UID: \"1efe6d30-3c28-4945-8615-49cafec58641\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.363291 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.443682 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqkmp\" (UniqueName: \"kubernetes.io/projected/1efe6d30-3c28-4945-8615-49cafec58641-kube-api-access-qqkmp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-s8zz8\" (UID: \"1efe6d30-3c28-4945-8615-49cafec58641\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.485813 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqkmp\" (UniqueName: \"kubernetes.io/projected/1efe6d30-3c28-4945-8615-49cafec58641-kube-api-access-qqkmp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-s8zz8\" (UID: \"1efe6d30-3c28-4945-8615-49cafec58641\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.534751 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk"] Jan 21 10:53:26 crc kubenswrapper[4745]: W0121 10:53:26.548696 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9337025_a702_4dd2_b8a4_e807525a34f5.slice/crio-f28b5a11b3829173a536da09a51b16800f248a3aca0e3657f3aaa03c8f0b0106 WatchSource:0}: Error finding container f28b5a11b3829173a536da09a51b16800f248a3aca0e3657f3aaa03c8f0b0106: Status 404 returned error can't find the container with id f28b5a11b3829173a536da09a51b16800f248a3aca0e3657f3aaa03c8f0b0106 Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.616290 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.643876 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj"] Jan 21 10:53:26 crc kubenswrapper[4745]: W0121 10:53:26.689433 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf99a5f65_e2aa_4476_b4c6_6566761f1ad2.slice/crio-4ab47c62975c9ed728a2d26408536cde5d59bb0731b55997214b3c5795fa7d6d WatchSource:0}: Error finding container 4ab47c62975c9ed728a2d26408536cde5d59bb0731b55997214b3c5795fa7d6d: Status 404 returned error can't find the container with id 4ab47c62975c9ed728a2d26408536cde5d59bb0731b55997214b3c5795fa7d6d Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.749568 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.749663 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.749802 4745 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.749862 4745 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:27.749845309 +0000 UTC m=+992.210632907 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "metrics-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.750295 4745 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.750354 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:27.750336462 +0000 UTC m=+992.211124060 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "webhook-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.855145 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert\") pod \"infra-operator-controller-manager-77c48c7859-4nt9f\" (UID: \"2528950f-ec80-4609-a77c-d6fbb2768e3b\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.855521 4745 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: E0121 10:53:26.855581 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert podName:2528950f-ec80-4609-a77c-d6fbb2768e3b nodeName:}" failed. No retries permitted until 2026-01-21 10:53:28.855567682 +0000 UTC m=+993.316355280 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert") pod "infra-operator-controller-manager-77c48c7859-4nt9f" (UID: "2528950f-ec80-4609-a77c-d6fbb2768e3b") : secret "infra-operator-webhook-server-cert" not found Jan 21 10:53:26 crc kubenswrapper[4745]: I0121 10:53:26.974188 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj"] Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.093239 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft"] Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.104413 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf"] Jan 21 10:53:27 crc kubenswrapper[4745]: W0121 10:53:27.122640 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42c37f0d_415a_4a72_ae98_07551477c6cc.slice/crio-3cc3e944c2ed082788fedbb294ec2a142eae30bbf0099d5510af4b4e42bf93ba WatchSource:0}: Error finding container 3cc3e944c2ed082788fedbb294ec2a142eae30bbf0099d5510af4b4e42bf93ba: Status 404 returned error can't find the container with id 3cc3e944c2ed082788fedbb294ec2a142eae30bbf0099d5510af4b4e42bf93ba Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.151392 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk" event={"ID":"d9337025-a702-4dd2-b8a4-e807525a34f5","Type":"ContainerStarted","Data":"f28b5a11b3829173a536da09a51b16800f248a3aca0e3657f3aaa03c8f0b0106"} Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.153982 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" 
event={"ID":"784904b1-a1d9-4319-be67-34e3dfdc1c9a","Type":"ContainerStarted","Data":"3201b31fcc060a82e8f7a5198c45c62e9354860a32ec7dda05f79a82e9898197"} Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.155679 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" event={"ID":"b28edf64-70dc-4fc2-8d7f-c1f141cbd31e","Type":"ContainerStarted","Data":"9bf142f734af82fe2b8847282b47208270967665acd8d2fb614f37b41ec8dc71"} Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.156659 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" event={"ID":"f99a5f65-e2aa-4476-b4c6-6566761f1ad2","Type":"ContainerStarted","Data":"4ab47c62975c9ed728a2d26408536cde5d59bb0731b55997214b3c5795fa7d6d"} Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.158196 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" event={"ID":"42c37f0d-415a-4a72-ae98-07551477c6cc","Type":"ContainerStarted","Data":"3cc3e944c2ed082788fedbb294ec2a142eae30bbf0099d5510af4b4e42bf93ba"} Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.301726 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:27 crc kubenswrapper[4745]: E0121 10:53:27.301841 4745 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:27 crc kubenswrapper[4745]: E0121 10:53:27.302129 4745 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert podName:1f562ebe-222a-441b-9277-0aa69a0c0fb3 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:29.302108994 +0000 UTC m=+993.762896592 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" (UID: "1f562ebe-222a-441b-9277-0aa69a0c0fb3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.378967 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656"] Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.390954 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql"] Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.535449 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs"] Jan 21 10:53:27 crc kubenswrapper[4745]: W0121 10:53:27.543597 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2134ae1d_74cb_4b1e_a2e7_f9aab5bdc462.slice/crio-f2f102886a65447a64e692a640bff9f37b1acf684688992ad21628fc2ecdd5c1 WatchSource:0}: Error finding container f2f102886a65447a64e692a640bff9f37b1acf684688992ad21628fc2ecdd5c1: Status 404 returned error can't find the container with id f2f102886a65447a64e692a640bff9f37b1acf684688992ad21628fc2ecdd5c1 Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.554979 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts"] Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.641030 4745 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m"] Jan 21 10:53:27 crc kubenswrapper[4745]: W0121 10:53:27.657139 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe658ac1_07b6_482b_8b99_35a75fcf3b50.slice/crio-300508f57fc094a1f5f55e2480c478814b8f0bc051c39fe0c853efeb5125b633 WatchSource:0}: Error finding container 300508f57fc094a1f5f55e2480c478814b8f0bc051c39fe0c853efeb5125b633: Status 404 returned error can't find the container with id 300508f57fc094a1f5f55e2480c478814b8f0bc051c39fe0c853efeb5125b633 Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.808763 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.808823 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:27 crc kubenswrapper[4745]: E0121 10:53:27.808953 4745 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 10:53:27 crc kubenswrapper[4745]: E0121 10:53:27.808997 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. 
No retries permitted until 2026-01-21 10:53:29.808982967 +0000 UTC m=+994.269770565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "webhook-server-cert" not found Jan 21 10:53:27 crc kubenswrapper[4745]: E0121 10:53:27.809296 4745 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 10:53:27 crc kubenswrapper[4745]: E0121 10:53:27.809318 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:29.809311305 +0000 UTC m=+994.270098903 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "metrics-server-cert" not found Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.848374 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg"] Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.871022 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-c6994669c-gntws"] Jan 21 10:53:27 crc kubenswrapper[4745]: I0121 10:53:27.893705 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf"] Jan 21 10:53:27 crc kubenswrapper[4745]: W0121 10:53:27.928831 4745 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff19137_02fd_4de1_9601_95a5c0fbbed0.slice/crio-aeb1e68d6ff88cb83012e2ff6c1c0c3bbf4b315de88af7fbacf5132c617f19ac WatchSource:0}: Error finding container aeb1e68d6ff88cb83012e2ff6c1c0c3bbf4b315de88af7fbacf5132c617f19ac: Status 404 returned error can't find the container with id aeb1e68d6ff88cb83012e2ff6c1c0c3bbf4b315de88af7fbacf5132c617f19ac Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.003889 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d"] Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.003966 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt"] Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.030233 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5"] Jan 21 10:53:28 crc kubenswrapper[4745]: W0121 10:53:28.077171 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc0985a55_6ede_4214_87fe_27cb5668dd86.slice/crio-4a2babacecb85d591437730d153c99f65c58bb5126f5c38140ac743994bf0561 WatchSource:0}: Error finding container 4a2babacecb85d591437730d153c99f65c58bb5126f5c38140ac743994bf0561: Status 404 returned error can't find the container with id 4a2babacecb85d591437730d153c99f65c58bb5126f5c38140ac743994bf0561 Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.158931 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb"] Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.186750 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4"] Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.189820 
4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" event={"ID":"be658ac1-07b6-482b-8b99-35a75fcf3b50","Type":"ContainerStarted","Data":"300508f57fc094a1f5f55e2480c478814b8f0bc051c39fe0c853efeb5125b633"} Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.198510 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" event={"ID":"bc9be084-edd6-4556-88af-354f416d451c","Type":"ContainerStarted","Data":"19d2358d25a383a67ffa6ecd587a50d3ae5f8031cf02cf5339209b485a973493"} Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.206348 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" event={"ID":"dfb1f262-fe24-45bf-8f75-0e2a81989f3f","Type":"ContainerStarted","Data":"817c7cf652e8a3e3f6b99554e0894476bf1af102fd54ae45adbc87b98d3064e3"} Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.216619 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6"] Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.219143 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" event={"ID":"a96f3189-7bbc-404d-ad6d-05b8fefb65fc","Type":"ContainerStarted","Data":"fdbe083d75d4abdbc13a3c080c970ec68d28567cf7fe3b8c7e1516ae0fa52094"} Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.268905 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" event={"ID":"2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462","Type":"ContainerStarted","Data":"f2f102886a65447a64e692a640bff9f37b1acf684688992ad21628fc2ecdd5c1"} Jan 21 10:53:28 crc kubenswrapper[4745]: E0121 10:53:28.269126 4745 kuberuntime_manager.go:1274] "Unhandled Error" 
err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c8m2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-686df47fcb-8v4t6_openstack-operators(ab348be4-f24d-41f5-947a-7f49dc330aa9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 10:53:28 crc kubenswrapper[4745]: E0121 10:53:28.273345 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" podUID="ab348be4-f24d-41f5-947a-7f49dc330aa9" Jan 21 10:53:28 crc kubenswrapper[4745]: W0121 10:53:28.273609 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1efe6d30_3c28_4945_8615_49cafec58641.slice/crio-6c8c2991592f9fb666ff9a51b66cef6e466abb75ef3afb8985bd3f08100d2eed WatchSource:0}: Error finding container 6c8c2991592f9fb666ff9a51b66cef6e466abb75ef3afb8985bd3f08100d2eed: Status 404 returned error can't find the container with id 6c8c2991592f9fb666ff9a51b66cef6e466abb75ef3afb8985bd3f08100d2eed Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.274379 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8"] Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.278906 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" event={"ID":"c0985a55-6ede-4214-87fe-27cb5668dd86","Type":"ContainerStarted","Data":"4a2babacecb85d591437730d153c99f65c58bb5126f5c38140ac743994bf0561"} Jan 21 10:53:28 crc kubenswrapper[4745]: E0121 10:53:28.279384 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qqkmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-s8zz8_openstack-operators(1efe6d30-3c28-4945-8615-49cafec58641): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 21 10:53:28 crc kubenswrapper[4745]: E0121 10:53:28.281677 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" podUID="1efe6d30-3c28-4945-8615-49cafec58641" Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.282284 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt" event={"ID":"94d1ae33-41a7-414c-b0d9-cc843ca9fa47","Type":"ContainerStarted","Data":"093ba734e5c1522484c2235f29ffd0d37cb942752065f1d67f6c4be3bf977f28"} Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.285327 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" 
event={"ID":"57b58631-9efc-4cdb-bb89-47aa70a6bd98","Type":"ContainerStarted","Data":"56b55923ccc913747e89b2a803e64b7c6954edfb6454e7f29c003f1c7aeff395"} Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.287880 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" event={"ID":"9ff19137-02fd-4de1-9601-95a5c0fbbed0","Type":"ContainerStarted","Data":"aeb1e68d6ff88cb83012e2ff6c1c0c3bbf4b315de88af7fbacf5132c617f19ac"} Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.292225 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" event={"ID":"fb04ba1c-d6a0-40aa-b985-f4715cb11257","Type":"ContainerStarted","Data":"a4b43c447b9e333492d6b3531f4ea9a7af3f621239165a3e1a9b6207121981dc"} Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.294689 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" event={"ID":"a292ef63-66c6-4416-8212-7b06a9bb8761","Type":"ContainerStarted","Data":"233f1f1af14d3e56ceb897328cee0d671505ffcabf0ed83bb2d2c2a312153838"} Jan 21 10:53:28 crc kubenswrapper[4745]: I0121 10:53:28.930032 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert\") pod \"infra-operator-controller-manager-77c48c7859-4nt9f\" (UID: \"2528950f-ec80-4609-a77c-d6fbb2768e3b\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:28 crc kubenswrapper[4745]: E0121 10:53:28.930225 4745 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 10:53:28 crc kubenswrapper[4745]: E0121 10:53:28.930407 4745 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert podName:2528950f-ec80-4609-a77c-d6fbb2768e3b nodeName:}" failed. No retries permitted until 2026-01-21 10:53:32.930392864 +0000 UTC m=+997.391180462 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert") pod "infra-operator-controller-manager-77c48c7859-4nt9f" (UID: "2528950f-ec80-4609-a77c-d6fbb2768e3b") : secret "infra-operator-webhook-server-cert" not found Jan 21 10:53:29 crc kubenswrapper[4745]: I0121 10:53:29.320734 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" event={"ID":"dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19","Type":"ContainerStarted","Data":"6511c97202e53afc4ad51aa64d2a8a313d0b927c7e42f26637561001fa7d8f45"} Jan 21 10:53:29 crc kubenswrapper[4745]: I0121 10:53:29.323476 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" event={"ID":"1efe6d30-3c28-4945-8615-49cafec58641","Type":"ContainerStarted","Data":"6c8c2991592f9fb666ff9a51b66cef6e466abb75ef3afb8985bd3f08100d2eed"} Jan 21 10:53:29 crc kubenswrapper[4745]: I0121 10:53:29.329434 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" event={"ID":"10226f41-eb60-45bf-a116-c51f3de0ea39","Type":"ContainerStarted","Data":"a405802f36b0cd1b720bbe0b0b95d86921345c6c23a9c3ebfbcf9168eacf4308"} Jan 21 10:53:29 crc kubenswrapper[4745]: I0121 10:53:29.336543 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:29 crc kubenswrapper[4745]: E0121 10:53:29.336714 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" podUID="1efe6d30-3c28-4945-8615-49cafec58641" Jan 21 10:53:29 crc kubenswrapper[4745]: E0121 10:53:29.336743 4745 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:29 crc kubenswrapper[4745]: E0121 10:53:29.336807 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert podName:1f562ebe-222a-441b-9277-0aa69a0c0fb3 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:33.336793057 +0000 UTC m=+997.797580655 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" (UID: "1f562ebe-222a-441b-9277-0aa69a0c0fb3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:29 crc kubenswrapper[4745]: I0121 10:53:29.347637 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" event={"ID":"ab348be4-f24d-41f5-947a-7f49dc330aa9","Type":"ContainerStarted","Data":"4f5bf13e0e439a20e602d3dd6dcb8664be1557ee2a954e31ab7fff5d0709bb5a"} Jan 21 10:53:29 crc kubenswrapper[4745]: E0121 10:53:29.355061 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" podUID="ab348be4-f24d-41f5-947a-7f49dc330aa9" Jan 21 10:53:29 crc kubenswrapper[4745]: I0121 10:53:29.845175 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:29 crc kubenswrapper[4745]: I0121 10:53:29.845424 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " 
pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:29 crc kubenswrapper[4745]: E0121 10:53:29.845338 4745 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 10:53:29 crc kubenswrapper[4745]: E0121 10:53:29.845514 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:33.845491017 +0000 UTC m=+998.306278615 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "metrics-server-cert" not found Jan 21 10:53:29 crc kubenswrapper[4745]: E0121 10:53:29.846074 4745 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 10:53:29 crc kubenswrapper[4745]: E0121 10:53:29.846137 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:33.846113092 +0000 UTC m=+998.306900690 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "webhook-server-cert" not found Jan 21 10:53:30 crc kubenswrapper[4745]: E0121 10:53:30.367942 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" podUID="1efe6d30-3c28-4945-8615-49cafec58641" Jan 21 10:53:30 crc kubenswrapper[4745]: E0121 10:53:30.386386 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:146961cac3291daf96c1ca2bc7bd52bc94d1f4787a0770e23205c2c9beb0d737\\\"\"" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" podUID="ab348be4-f24d-41f5-947a-7f49dc330aa9" Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.032248 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wnj4j"] Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.034857 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.054791 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnj4j"] Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.116451 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-catalog-content\") pod \"community-operators-wnj4j\" (UID: \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.116500 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9wcw\" (UniqueName: \"kubernetes.io/projected/33408533-2ed3-4fd9-aaf4-e4c832ff7805-kube-api-access-b9wcw\") pod \"community-operators-wnj4j\" (UID: \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.116683 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-utilities\") pod \"community-operators-wnj4j\" (UID: \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.218551 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-utilities\") pod \"community-operators-wnj4j\" (UID: \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.218673 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-catalog-content\") pod \"community-operators-wnj4j\" (UID: \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.218705 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9wcw\" (UniqueName: \"kubernetes.io/projected/33408533-2ed3-4fd9-aaf4-e4c832ff7805-kube-api-access-b9wcw\") pod \"community-operators-wnj4j\" (UID: \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.220038 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-utilities\") pod \"community-operators-wnj4j\" (UID: \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.220284 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-catalog-content\") pod \"community-operators-wnj4j\" (UID: \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.252143 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9wcw\" (UniqueName: \"kubernetes.io/projected/33408533-2ed3-4fd9-aaf4-e4c832ff7805-kube-api-access-b9wcw\") pod \"community-operators-wnj4j\" (UID: \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:53:32 crc kubenswrapper[4745]: I0121 10:53:32.361344 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:53:33 crc kubenswrapper[4745]: I0121 10:53:33.028917 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert\") pod \"infra-operator-controller-manager-77c48c7859-4nt9f\" (UID: \"2528950f-ec80-4609-a77c-d6fbb2768e3b\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:33 crc kubenswrapper[4745]: E0121 10:53:33.029070 4745 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 21 10:53:33 crc kubenswrapper[4745]: E0121 10:53:33.029124 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert podName:2528950f-ec80-4609-a77c-d6fbb2768e3b nodeName:}" failed. No retries permitted until 2026-01-21 10:53:41.029109336 +0000 UTC m=+1005.489896934 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert") pod "infra-operator-controller-manager-77c48c7859-4nt9f" (UID: "2528950f-ec80-4609-a77c-d6fbb2768e3b") : secret "infra-operator-webhook-server-cert" not found Jan 21 10:53:33 crc kubenswrapper[4745]: I0121 10:53:33.346290 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:33 crc kubenswrapper[4745]: E0121 10:53:33.346637 4745 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:33 crc kubenswrapper[4745]: E0121 10:53:33.346694 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert podName:1f562ebe-222a-441b-9277-0aa69a0c0fb3 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:41.346676586 +0000 UTC m=+1005.807464194 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" (UID: "1f562ebe-222a-441b-9277-0aa69a0c0fb3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:33 crc kubenswrapper[4745]: I0121 10:53:33.851826 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:33 crc kubenswrapper[4745]: I0121 10:53:33.851952 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:33 crc kubenswrapper[4745]: E0121 10:53:33.852038 4745 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 10:53:33 crc kubenswrapper[4745]: E0121 10:53:33.852085 4745 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 10:53:33 crc kubenswrapper[4745]: E0121 10:53:33.852116 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:41.852095971 +0000 UTC m=+1006.312883569 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "webhook-server-cert" not found Jan 21 10:53:33 crc kubenswrapper[4745]: E0121 10:53:33.852138 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:41.852124122 +0000 UTC m=+1006.312911720 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "metrics-server-cert" not found Jan 21 10:53:41 crc kubenswrapper[4745]: I0121 10:53:41.067767 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert\") pod \"infra-operator-controller-manager-77c48c7859-4nt9f\" (UID: \"2528950f-ec80-4609-a77c-d6fbb2768e3b\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:41 crc kubenswrapper[4745]: I0121 10:53:41.078674 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2528950f-ec80-4609-a77c-d6fbb2768e3b-cert\") pod \"infra-operator-controller-manager-77c48c7859-4nt9f\" (UID: \"2528950f-ec80-4609-a77c-d6fbb2768e3b\") " pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:41 crc kubenswrapper[4745]: I0121 10:53:41.111933 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:53:41 crc kubenswrapper[4745]: I0121 10:53:41.372574 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:41 crc kubenswrapper[4745]: E0121 10:53:41.372754 4745 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:41 crc kubenswrapper[4745]: E0121 10:53:41.372825 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert podName:1f562ebe-222a-441b-9277-0aa69a0c0fb3 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:57.372803394 +0000 UTC m=+1021.833590992 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" (UID: "1f562ebe-222a-441b-9277-0aa69a0c0fb3") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 21 10:53:41 crc kubenswrapper[4745]: I0121 10:53:41.880290 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:41 crc kubenswrapper[4745]: I0121 10:53:41.880446 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:41 crc kubenswrapper[4745]: E0121 10:53:41.880488 4745 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 21 10:53:41 crc kubenswrapper[4745]: E0121 10:53:41.880556 4745 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 21 10:53:41 crc kubenswrapper[4745]: E0121 10:53:41.880564 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:57.880547358 +0000 UTC m=+1022.341334956 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "webhook-server-cert" not found Jan 21 10:53:41 crc kubenswrapper[4745]: E0121 10:53:41.880585 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs podName:8ed49bb1-d169-4518-b064-3fb35fd1bad0 nodeName:}" failed. No retries permitted until 2026-01-21 10:53:57.880576539 +0000 UTC m=+1022.341364137 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs") pod "openstack-operator-controller-manager-78d57d4fdd-dxmll" (UID: "8ed49bb1-d169-4518-b064-3fb35fd1bad0") : secret "metrics-server-cert" not found Jan 21 10:53:42 crc kubenswrapper[4745]: E0121 10:53:42.362190 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843" Jan 21 10:53:42 crc kubenswrapper[4745]: E0121 10:53:42.362491 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qd5p8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5f8f495fcf-dh2t4_openstack-operators(dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:42 crc kubenswrapper[4745]: E0121 10:53:42.363770 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" podUID="dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19" Jan 21 10:53:42 crc kubenswrapper[4745]: E0121 10:53:42.460041 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:2e89109f5db66abf1afd15ef59bda35a53db40c5e59e020579ac5aa0acea1843\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" podUID="dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19" Jan 21 10:53:43 crc kubenswrapper[4745]: E0121 10:53:43.234770 4745 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 21 10:53:43 crc kubenswrapper[4745]: E0121 10:53:43.234996 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bk8fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-77d5c5b54f-sqhft_openstack-operators(784904b1-a1d9-4319-be67-34e3dfdc1c9a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:43 crc kubenswrapper[4745]: E0121 10:53:43.236265 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" podUID="784904b1-a1d9-4319-be67-34e3dfdc1c9a" Jan 21 10:53:43 crc kubenswrapper[4745]: E0121 10:53:43.467522 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" podUID="784904b1-a1d9-4319-be67-34e3dfdc1c9a" Jan 21 10:53:45 crc kubenswrapper[4745]: E0121 10:53:45.341355 4745 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32" Jan 21 10:53:45 crc kubenswrapper[4745]: E0121 10:53:45.341661 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5l9n7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-864f6b75bf-dvhql_openstack-operators(dfb1f262-fe24-45bf-8f75-0e2a81989f3f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:45 crc kubenswrapper[4745]: E0121 10:53:45.343012 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" podUID="dfb1f262-fe24-45bf-8f75-0e2a81989f3f" Jan 21 10:53:45 crc kubenswrapper[4745]: E0121 10:53:45.484186 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:fd2e631e747c35a95f083418f5829d06c4b830f1fdb322368ff6190b9887ea32\\\"\"" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" podUID="dfb1f262-fe24-45bf-8f75-0e2a81989f3f" Jan 21 10:53:45 crc kubenswrapper[4745]: I0121 10:53:45.866108 4745 patch_prober.go:28] interesting 
pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:53:45 crc kubenswrapper[4745]: I0121 10:53:45.866373 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:53:45 crc kubenswrapper[4745]: E0121 10:53:45.883009 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c" Jan 21 10:53:45 crc kubenswrapper[4745]: E0121 10:53:45.883209 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mn68s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-cb4666565-x9mpf_openstack-operators(42c37f0d-415a-4a72-ae98-07551477c6cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:45 crc kubenswrapper[4745]: E0121 10:53:45.884384 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" podUID="42c37f0d-415a-4a72-ae98-07551477c6cc" Jan 21 10:53:46 crc kubenswrapper[4745]: E0121 10:53:46.490898 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0f440bf7dc937ce0135bdd328716686fd2f1320f453a9ac4e11e96383148ad6c\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" podUID="42c37f0d-415a-4a72-ae98-07551477c6cc" Jan 21 10:53:49 crc kubenswrapper[4745]: E0121 10:53:49.209880 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a" Jan 21 10:53:49 crc kubenswrapper[4745]: E0121 10:53:49.210369 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5xmmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7ddb5c749-bqhjj_openstack-operators(f99a5f65-e2aa-4476-b4c6-6566761f1ad2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:49 crc kubenswrapper[4745]: E0121 10:53:49.211667 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" podUID="f99a5f65-e2aa-4476-b4c6-6566761f1ad2" Jan 21 10:53:49 crc kubenswrapper[4745]: E0121 10:53:49.512058 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/barbican-operator@sha256:f0634d8cf7c2c2919ca248a6883ce43d6ae4ac59252c987a5cfe17643fe7d38a\\\"\"" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" podUID="f99a5f65-e2aa-4476-b4c6-6566761f1ad2" Jan 21 10:53:49 crc kubenswrapper[4745]: E0121 10:53:49.717880 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 21 10:53:49 crc kubenswrapper[4745]: E0121 10:53:49.718178 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lc7pf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-g4gpj_openstack-operators(b28edf64-70dc-4fc2-8d7f-c1f141cbd31e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:49 crc kubenswrapper[4745]: E0121 10:53:49.719387 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" podUID="b28edf64-70dc-4fc2-8d7f-c1f141cbd31e" Jan 21 10:53:50 crc kubenswrapper[4745]: E0121 10:53:50.519060 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" podUID="b28edf64-70dc-4fc2-8d7f-c1f141cbd31e" Jan 21 10:53:51 crc kubenswrapper[4745]: E0121 10:53:51.809202 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729" Jan 21 10:53:51 crc kubenswrapper[4745]: E0121 10:53:51.809389 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-48xrz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-7fc9b76cf6-bx656_openstack-operators(a96f3189-7bbc-404d-ad6d-05b8fefb65fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:51 crc kubenswrapper[4745]: E0121 10:53:51.810680 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" podUID="a96f3189-7bbc-404d-ad6d-05b8fefb65fc" Jan 21 10:53:52 crc kubenswrapper[4745]: E0121 10:53:52.419163 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525" Jan 21 10:53:52 crc kubenswrapper[4745]: E0121 10:53:52.419855 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-22lzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-78757b4889-clbcs_openstack-operators(2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:52 crc kubenswrapper[4745]: E0121 10:53:52.421076 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" podUID="2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462" Jan 21 10:53:52 crc kubenswrapper[4745]: E0121 10:53:52.537239 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ab629ec4ce57b5cde9cd6d75069e68edca441b97b7b5a3f58804e2e61766b729\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" podUID="a96f3189-7bbc-404d-ad6d-05b8fefb65fc" Jan 21 10:53:52 crc kubenswrapper[4745]: E0121 10:53:52.537690 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:56c5f8b78445b3dbfc0d5afd9312906f6bef4dccf67302b0e4e5ca20bd263525\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" podUID="2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462" Jan 21 10:53:52 crc kubenswrapper[4745]: E0121 10:53:52.905400 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e" Jan 21 10:53:52 crc kubenswrapper[4745]: E0121 10:53:52.905619 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rgkwm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-7cd8bc9dbb-q4ccb_openstack-operators(10226f41-eb60-45bf-a116-c51f3de0ea39): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:52 crc kubenswrapper[4745]: E0121 10:53:52.907337 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" podUID="10226f41-eb60-45bf-a116-c51f3de0ea39" Jan 21 10:53:53 crc kubenswrapper[4745]: E0121 10:53:53.379638 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71" Jan 21 10:53:53 crc kubenswrapper[4745]: E0121 10:53:53.379873 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-czwn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-c87fff755-8xm9d_openstack-operators(c0985a55-6ede-4214-87fe-27cb5668dd86): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:53 crc kubenswrapper[4745]: E0121 10:53:53.381007 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" podUID="c0985a55-6ede-4214-87fe-27cb5668dd86" Jan 21 10:53:53 crc kubenswrapper[4745]: E0121 10:53:53.542394 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:ff0b6c27e2d96afccd73fbbb5b5297a3f60c7f4f1dfd2a877152466697018d71\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" podUID="c0985a55-6ede-4214-87fe-27cb5668dd86" Jan 21 10:53:53 crc kubenswrapper[4745]: E0121 10:53:53.542416 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:244a4906353b84899db16a89e1ebb64491c9f85e69327cb2a72b6da0142a6e5e\\\"\"" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" podUID="10226f41-eb60-45bf-a116-c51f3de0ea39" Jan 21 10:53:53 crc kubenswrapper[4745]: E0121 10:53:53.914255 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8" Jan 21 10:53:53 crc kubenswrapper[4745]: E0121 10:53:53.914781 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mn2mg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-9f958b845-hw9zg_openstack-operators(bc9be084-edd6-4556-88af-354f416d451c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:53 crc kubenswrapper[4745]: E0121 10:53:53.916137 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" podUID="bc9be084-edd6-4556-88af-354f416d451c" Jan 21 10:53:54 crc kubenswrapper[4745]: E0121 10:53:54.547046 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:0d59a405f50b37c833e14c0f4987e95c8769d9ab06a7087078bdd02568c18ca8\\\"\"" pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" podUID="bc9be084-edd6-4556-88af-354f416d451c" Jan 21 10:53:56 crc kubenswrapper[4745]: E0121 10:53:56.256662 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231" Jan 21 10:53:56 crc kubenswrapper[4745]: E0121 10:53:56.257273 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k9kfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-65849867d6-g8j7m_openstack-operators(be658ac1-07b6-482b-8b99-35a75fcf3b50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:56 crc kubenswrapper[4745]: E0121 10:53:56.258637 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" podUID="be658ac1-07b6-482b-8b99-35a75fcf3b50" Jan 21 10:53:56 crc kubenswrapper[4745]: E0121 10:53:56.574184 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:6defa56fc6a5bfbd5b27d28ff7b1c7bc89b24b2ef956e2a6d97b2726f668a231\\\"\"" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" podUID="be658ac1-07b6-482b-8b99-35a75fcf3b50" Jan 21 10:53:56 crc kubenswrapper[4745]: E0121 10:53:56.774174 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf" Jan 21 10:53:56 crc kubenswrapper[4745]: E0121 10:53:56.774371 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nxsh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-55db956ddc-j96sf_openstack-operators(a292ef63-66c6-4416-8212-7b06a9bb8761): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:56 crc kubenswrapper[4745]: E0121 10:53:56.775546 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" podUID="a292ef63-66c6-4416-8212-7b06a9bb8761" Jan 21 10:53:57 crc kubenswrapper[4745]: E0121 10:53:57.264755 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92" Jan 21 10:53:57 crc kubenswrapper[4745]: E0121 10:53:57.265016 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t595n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-85dd56d4cc-46lz5_openstack-operators(57b58631-9efc-4cdb-bb89-47aa70a6bd98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:57 crc kubenswrapper[4745]: E0121 10:53:57.266217 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" podUID="57b58631-9efc-4cdb-bb89-47aa70a6bd98" Jan 21 10:53:57 crc kubenswrapper[4745]: I0121 10:53:57.442327 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert\") pod 
\"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:57 crc kubenswrapper[4745]: I0121 10:53:57.452559 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1f562ebe-222a-441b-9277-0aa69a0c0fb3-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4\" (UID: \"1f562ebe-222a-441b-9277-0aa69a0c0fb3\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:57 crc kubenswrapper[4745]: E0121 10:53:57.581785 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:9404536bf7cb7c3818e1a0f92b53e4d7c02fe7942324f32894106f02f8fc7e92\\\"\"" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" podUID="57b58631-9efc-4cdb-bb89-47aa70a6bd98" Jan 21 10:53:57 crc kubenswrapper[4745]: E0121 10:53:57.582187 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:8b3bfb9e86618b7ac69443939b0968fae28a22cd62ea1e429b599ff9f8a5f8cf\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" podUID="a292ef63-66c6-4416-8212-7b06a9bb8761" Jan 21 10:53:57 crc kubenswrapper[4745]: I0121 10:53:57.687876 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tgdzr" Jan 21 10:53:57 crc kubenswrapper[4745]: I0121 10:53:57.695392 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:53:57 crc kubenswrapper[4745]: I0121 10:53:57.695497 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnj4j"] Jan 21 10:53:57 crc kubenswrapper[4745]: I0121 10:53:57.952412 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:57 crc kubenswrapper[4745]: I0121 10:53:57.953084 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:57 crc kubenswrapper[4745]: I0121 10:53:57.959626 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-webhook-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:57 crc kubenswrapper[4745]: I0121 10:53:57.972298 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8ed49bb1-d169-4518-b064-3fb35fd1bad0-metrics-certs\") pod \"openstack-operator-controller-manager-78d57d4fdd-dxmll\" (UID: \"8ed49bb1-d169-4518-b064-3fb35fd1bad0\") " 
pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:58 crc kubenswrapper[4745]: I0121 10:53:58.032770 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7x4sw" Jan 21 10:53:58 crc kubenswrapper[4745]: I0121 10:53:58.040649 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:53:59 crc kubenswrapper[4745]: E0121 10:53:59.646628 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e" Jan 21 10:53:59 crc kubenswrapper[4745]: E0121 10:53:59.646800 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tn682,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-767fdc4f47-fh7ts_openstack-operators(fb04ba1c-d6a0-40aa-b985-f4715cb11257): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:53:59 crc kubenswrapper[4745]: E0121 10:53:59.648091 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" podUID="fb04ba1c-d6a0-40aa-b985-f4715cb11257" Jan 21 10:54:00 crc kubenswrapper[4745]: I0121 10:54:00.604789 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnj4j" event={"ID":"33408533-2ed3-4fd9-aaf4-e4c832ff7805","Type":"ContainerStarted","Data":"9ed515580a60c88e4697810e2c65e7b002c2a7ffd3fa424db5f447559dff866f"} Jan 21 10:54:00 crc kubenswrapper[4745]: E0121 10:54:00.606150 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:393d7567eef4fd05af625389f5a7384c6bb75108b21b06183f1f5e33aac5417e\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" podUID="fb04ba1c-d6a0-40aa-b985-f4715cb11257" Jan 21 10:54:01 crc kubenswrapper[4745]: E0121 10:54:01.794609 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 21 10:54:01 crc kubenswrapper[4745]: E0121 10:54:01.795145 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qqkmp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-s8zz8_openstack-operators(1efe6d30-3c28-4945-8615-49cafec58641): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:54:01 crc kubenswrapper[4745]: E0121 10:54:01.796340 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" podUID="1efe6d30-3c28-4945-8615-49cafec58641" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.321947 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f"] Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.481511 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4"] Jan 21 10:54:02 crc kubenswrapper[4745]: W0121 10:54:02.494172 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f562ebe_222a_441b_9277_0aa69a0c0fb3.slice/crio-5e9633ae55bdbf7a38f5a1b4fc417a74807272fa87d0380559db088d883305c1 WatchSource:0}: Error finding container 5e9633ae55bdbf7a38f5a1b4fc417a74807272fa87d0380559db088d883305c1: Status 404 returned error can't find the container with id 5e9633ae55bdbf7a38f5a1b4fc417a74807272fa87d0380559db088d883305c1 Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.635251 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll"] Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.646188 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" event={"ID":"2528950f-ec80-4609-a77c-d6fbb2768e3b","Type":"ContainerStarted","Data":"34d2b1b45681eb89e2bb490d6b8bb8d180825c81cf25e390ecb7e227cfe3ab5f"} Jan 21 10:54:02 
crc kubenswrapper[4745]: I0121 10:54:02.648083 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" event={"ID":"1f562ebe-222a-441b-9277-0aa69a0c0fb3","Type":"ContainerStarted","Data":"5e9633ae55bdbf7a38f5a1b4fc417a74807272fa87d0380559db088d883305c1"} Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.649342 4745 generic.go:334] "Generic (PLEG): container finished" podID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerID="412ddfbe00beb5fb34981a58f0ec770f80f36ba86b559955cf419056b86aa9e5" exitCode=0 Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.649414 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnj4j" event={"ID":"33408533-2ed3-4fd9-aaf4-e4c832ff7805","Type":"ContainerDied","Data":"412ddfbe00beb5fb34981a58f0ec770f80f36ba86b559955cf419056b86aa9e5"} Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.670242 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" event={"ID":"42c37f0d-415a-4a72-ae98-07551477c6cc","Type":"ContainerStarted","Data":"4c3d07508a0571eb21150e041a60f25d4c861398e9c9dc3d289671a376ef5e14"} Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.670723 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.684713 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk" event={"ID":"d9337025-a702-4dd2-b8a4-e807525a34f5","Type":"ContainerStarted","Data":"d60aefdac157dc3f2259d9c48fb410e7a6c47616a176ece9887fb0874a0f4658"} Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.685415 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.710296 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" event={"ID":"9ff19137-02fd-4de1-9601-95a5c0fbbed0","Type":"ContainerStarted","Data":"e94e56fae94481b759cd52507f2a32fd94113eed95b46d986b2d69926f0610f4"} Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.711180 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.721557 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" event={"ID":"dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19","Type":"ContainerStarted","Data":"35bb12116ff75a2c1286b28a207a630ed290611ed8839f0be56c3b2b526fba16"} Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.722407 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.726802 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" event={"ID":"f99a5f65-e2aa-4476-b4c6-6566761f1ad2","Type":"ContainerStarted","Data":"3f1d1bceafb0aaf10af2e3178a4ecc671d09a2b948395225c9e5d528cb21acfa"} Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.727553 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.734287 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" 
event={"ID":"ab348be4-f24d-41f5-947a-7f49dc330aa9","Type":"ContainerStarted","Data":"cc3ea9017225187e1ee7f863d5dcd3b4c9cf5790a4710bdfa10d259739c497fc"} Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.735026 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.741976 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" event={"ID":"784904b1-a1d9-4319-be67-34e3dfdc1c9a","Type":"ContainerStarted","Data":"5a4f5fd847e1fcc82ebc4097a4f6d47aaff02b5316cbd2bfb37f5c4ad19a42ae"} Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.742644 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.756933 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" podStartSLOduration=4.210593816 podStartE2EDuration="37.756912495s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:28.216612391 +0000 UTC m=+992.677399979" lastFinishedPulling="2026-01-21 10:54:01.76293105 +0000 UTC m=+1026.223718658" observedRunningTime="2026-01-21 10:54:02.755691384 +0000 UTC m=+1027.216478982" watchObservedRunningTime="2026-01-21 10:54:02.756912495 +0000 UTC m=+1027.217700093" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.767850 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" podStartSLOduration=2.980026497 podStartE2EDuration="37.767831702s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:27.13764198 +0000 UTC m=+991.598429578" 
lastFinishedPulling="2026-01-21 10:54:01.925447175 +0000 UTC m=+1026.386234783" observedRunningTime="2026-01-21 10:54:02.723777604 +0000 UTC m=+1027.184565192" watchObservedRunningTime="2026-01-21 10:54:02.767831702 +0000 UTC m=+1027.228619300" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.768163 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" event={"ID":"dfb1f262-fe24-45bf-8f75-0e2a81989f3f","Type":"ContainerStarted","Data":"1cb1961dc64851d47e3793fb87c13c94bdd1c22a02dbf1e5b1711177e027d83d"} Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.768647 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.782158 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt" event={"ID":"94d1ae33-41a7-414c-b0d9-cc843ca9fa47","Type":"ContainerStarted","Data":"abd9bc64c0ee29dde8887e6c5aaa4c9c0e31c9b63a4e8f6383b9e1b6dae74ef6"} Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.783002 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.791919 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" podStartSLOduration=7.103386099 podStartE2EDuration="38.791891992s" podCreationTimestamp="2026-01-21 10:53:24 +0000 UTC" firstStartedPulling="2026-01-21 10:53:27.942787783 +0000 UTC m=+992.403575381" lastFinishedPulling="2026-01-21 10:53:59.631293686 +0000 UTC m=+1024.092081274" observedRunningTime="2026-01-21 10:54:02.787116682 +0000 UTC m=+1027.247904270" watchObservedRunningTime="2026-01-21 10:54:02.791891992 +0000 
UTC m=+1027.252679590" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.831640 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk" podStartSLOduration=5.757511484 podStartE2EDuration="38.831615271s" podCreationTimestamp="2026-01-21 10:53:24 +0000 UTC" firstStartedPulling="2026-01-21 10:53:26.555438865 +0000 UTC m=+991.016226463" lastFinishedPulling="2026-01-21 10:53:59.629542652 +0000 UTC m=+1024.090330250" observedRunningTime="2026-01-21 10:54:02.817078691 +0000 UTC m=+1027.277866289" watchObservedRunningTime="2026-01-21 10:54:02.831615271 +0000 UTC m=+1027.292402869" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.879193 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" podStartSLOduration=4.205198692 podStartE2EDuration="38.879175338s" podCreationTimestamp="2026-01-21 10:53:24 +0000 UTC" firstStartedPulling="2026-01-21 10:53:27.091757866 +0000 UTC m=+991.552545464" lastFinishedPulling="2026-01-21 10:54:01.765734492 +0000 UTC m=+1026.226522110" observedRunningTime="2026-01-21 10:54:02.877402023 +0000 UTC m=+1027.338189621" watchObservedRunningTime="2026-01-21 10:54:02.879175338 +0000 UTC m=+1027.339962926" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.973027 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" podStartSLOduration=3.713259738 podStartE2EDuration="38.973002938s" podCreationTimestamp="2026-01-21 10:53:24 +0000 UTC" firstStartedPulling="2026-01-21 10:53:26.719163581 +0000 UTC m=+991.179951179" lastFinishedPulling="2026-01-21 10:54:01.978906771 +0000 UTC m=+1026.439694379" observedRunningTime="2026-01-21 10:54:02.966444842 +0000 UTC m=+1027.427232440" watchObservedRunningTime="2026-01-21 10:54:02.973002938 +0000 UTC 
m=+1027.433790536" Jan 21 10:54:02 crc kubenswrapper[4745]: I0121 10:54:02.975714 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt" podStartSLOduration=6.467714444 podStartE2EDuration="37.975703427s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:28.12277104 +0000 UTC m=+992.583558638" lastFinishedPulling="2026-01-21 10:53:59.630760023 +0000 UTC m=+1024.091547621" observedRunningTime="2026-01-21 10:54:02.939691623 +0000 UTC m=+1027.400479211" watchObservedRunningTime="2026-01-21 10:54:02.975703427 +0000 UTC m=+1027.436491115" Jan 21 10:54:03 crc kubenswrapper[4745]: I0121 10:54:03.074093 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" podStartSLOduration=3.481958505 podStartE2EDuration="38.074063923s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:27.434793691 +0000 UTC m=+991.895581289" lastFinishedPulling="2026-01-21 10:54:02.026899109 +0000 UTC m=+1026.487686707" observedRunningTime="2026-01-21 10:54:03.014480741 +0000 UTC m=+1027.475268339" watchObservedRunningTime="2026-01-21 10:54:03.074063923 +0000 UTC m=+1027.534851521" Jan 21 10:54:03 crc kubenswrapper[4745]: I0121 10:54:03.077203 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" podStartSLOduration=4.419106866 podStartE2EDuration="38.077192062s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:28.269003511 +0000 UTC m=+992.729791109" lastFinishedPulling="2026-01-21 10:54:01.927088707 +0000 UTC m=+1026.387876305" observedRunningTime="2026-01-21 10:54:03.074015492 +0000 UTC m=+1027.534803090" watchObservedRunningTime="2026-01-21 10:54:03.077192062 +0000 UTC m=+1027.537979660" 
Jan 21 10:54:03 crc kubenswrapper[4745]: I0121 10:54:03.824882 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" event={"ID":"b28edf64-70dc-4fc2-8d7f-c1f141cbd31e","Type":"ContainerStarted","Data":"888f51ebb4f08ab4f2948fe0d7fb2237b36a03c303a6c0d16534aea6c28ec64a"} Jan 21 10:54:03 crc kubenswrapper[4745]: I0121 10:54:03.825835 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" Jan 21 10:54:03 crc kubenswrapper[4745]: I0121 10:54:03.834266 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnj4j" event={"ID":"33408533-2ed3-4fd9-aaf4-e4c832ff7805","Type":"ContainerStarted","Data":"122bebb0b8584447d562c7ec9cd5216df7603d558759a59d6a95702c62d55b7f"} Jan 21 10:54:03 crc kubenswrapper[4745]: I0121 10:54:03.837058 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" event={"ID":"8ed49bb1-d169-4518-b064-3fb35fd1bad0","Type":"ContainerStarted","Data":"42d4b8dafe63f4ed648ba9ed30cfb242d84b9a9fb7cab89058b8c24aebe4465c"} Jan 21 10:54:03 crc kubenswrapper[4745]: I0121 10:54:03.837088 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:54:03 crc kubenswrapper[4745]: I0121 10:54:03.837101 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" event={"ID":"8ed49bb1-d169-4518-b064-3fb35fd1bad0","Type":"ContainerStarted","Data":"938c57895f648bd525ac20586d1261bf35ba5ba32945940c2bda653fde0b46af"} Jan 21 10:54:03 crc kubenswrapper[4745]: I0121 10:54:03.845406 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" podStartSLOduration=4.293531563 podStartE2EDuration="39.845386727s" podCreationTimestamp="2026-01-21 10:53:24 +0000 UTC" firstStartedPulling="2026-01-21 10:53:26.997309679 +0000 UTC m=+991.458097277" lastFinishedPulling="2026-01-21 10:54:02.549164843 +0000 UTC m=+1027.009952441" observedRunningTime="2026-01-21 10:54:03.839519548 +0000 UTC m=+1028.300307146" watchObservedRunningTime="2026-01-21 10:54:03.845386727 +0000 UTC m=+1028.306174325" Jan 21 10:54:03 crc kubenswrapper[4745]: I0121 10:54:03.887262 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" podStartSLOduration=38.887246009 podStartE2EDuration="38.887246009s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:54:03.882742165 +0000 UTC m=+1028.343529763" watchObservedRunningTime="2026-01-21 10:54:03.887246009 +0000 UTC m=+1028.348033607" Jan 21 10:54:04 crc kubenswrapper[4745]: I0121 10:54:04.861082 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" event={"ID":"a96f3189-7bbc-404d-ad6d-05b8fefb65fc","Type":"ContainerStarted","Data":"821437e20cec909080c13438fac073e93e1a5323040b089a5f324b5ec6f99162"} Jan 21 10:54:04 crc kubenswrapper[4745]: I0121 10:54:04.863469 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" Jan 21 10:54:04 crc kubenswrapper[4745]: I0121 10:54:04.868364 4745 generic.go:334] "Generic (PLEG): container finished" podID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerID="122bebb0b8584447d562c7ec9cd5216df7603d558759a59d6a95702c62d55b7f" exitCode=0 Jan 21 10:54:04 crc kubenswrapper[4745]: I0121 
10:54:04.871930 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnj4j" event={"ID":"33408533-2ed3-4fd9-aaf4-e4c832ff7805","Type":"ContainerDied","Data":"122bebb0b8584447d562c7ec9cd5216df7603d558759a59d6a95702c62d55b7f"} Jan 21 10:54:04 crc kubenswrapper[4745]: I0121 10:54:04.892909 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" podStartSLOduration=2.8311708490000003 podStartE2EDuration="39.892896769s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:27.399083774 +0000 UTC m=+991.859871372" lastFinishedPulling="2026-01-21 10:54:04.460809704 +0000 UTC m=+1028.921597292" observedRunningTime="2026-01-21 10:54:04.877178851 +0000 UTC m=+1029.337966449" watchObservedRunningTime="2026-01-21 10:54:04.892896769 +0000 UTC m=+1029.353684357" Jan 21 10:54:08 crc kubenswrapper[4745]: I0121 10:54:08.052798 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.012504 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.929630 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnj4j" event={"ID":"33408533-2ed3-4fd9-aaf4-e4c832ff7805","Type":"ContainerStarted","Data":"f9ca87a8672655bdacb44c803d2a930f9e4922f0e81d777b4495cceca1a17f28"} Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.931161 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" event={"ID":"be658ac1-07b6-482b-8b99-35a75fcf3b50","Type":"ContainerStarted","Data":"2321798c88013a534c1ef9cabce588e57afea69144b493383e5dc76f44c51fdb"} Jan 21 
10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.934249 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" event={"ID":"57b58631-9efc-4cdb-bb89-47aa70a6bd98","Type":"ContainerStarted","Data":"deab5d5b3d8e8a4115ff512f53720e5055d5817cc9d63e3e145d6ee992fa7313"} Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.934452 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.936468 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" event={"ID":"2528950f-ec80-4609-a77c-d6fbb2768e3b","Type":"ContainerStarted","Data":"d4f5d0a2118a50c99f5750ac5c4a2e3daaea78283a2ed148afd391f483e7c61f"} Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.936581 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.938394 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" event={"ID":"10226f41-eb60-45bf-a116-c51f3de0ea39","Type":"ContainerStarted","Data":"1d7515fd1825fd8961dda2f4e38169ab994205d16c3dad1aeb39f152b7de52d9"} Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.938595 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.939999 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" 
event={"ID":"bc9be084-edd6-4556-88af-354f416d451c","Type":"ContainerStarted","Data":"3d8bf3147e6d26d1e08f25a508e6838e22252944906c9a5eb6d29b6f379db82b"} Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.940193 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.941955 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" event={"ID":"a292ef63-66c6-4416-8212-7b06a9bb8761","Type":"ContainerStarted","Data":"e5428840d4f47afa4d9187efb547433661e007a04236b1f66b0b85a035adf070"} Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.942141 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.943625 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" event={"ID":"2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462","Type":"ContainerStarted","Data":"a122b659f56aaef7d9c2d86ce9f9020c74c464e0091b6f82cfade325b21abd36"} Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.943748 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.945463 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" event={"ID":"c0985a55-6ede-4214-87fe-27cb5668dd86","Type":"ContainerStarted","Data":"2eee870d56a3f8784fe93a88397f9f96e3c18d2ef304f33b633b1cda8d217855"} Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.945654 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.947146 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" event={"ID":"1f562ebe-222a-441b-9277-0aa69a0c0fb3","Type":"ContainerStarted","Data":"df36c558afcdc4fde07cf8615f8d86c2f09301dedb552f70cd9a1a27d7953559"} Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.947267 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:54:10 crc kubenswrapper[4745]: I0121 10:54:10.966460 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wnj4j" podStartSLOduration=31.572684387 podStartE2EDuration="38.966423857s" podCreationTimestamp="2026-01-21 10:53:32 +0000 UTC" firstStartedPulling="2026-01-21 10:54:02.681627794 +0000 UTC m=+1027.142415392" lastFinishedPulling="2026-01-21 10:54:10.075367264 +0000 UTC m=+1034.536154862" observedRunningTime="2026-01-21 10:54:10.961182054 +0000 UTC m=+1035.421969652" watchObservedRunningTime="2026-01-21 10:54:10.966423857 +0000 UTC m=+1035.427211455" Jan 21 10:54:11 crc kubenswrapper[4745]: I0121 10:54:11.019779 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" podStartSLOduration=38.452358502 podStartE2EDuration="46.01976283s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:54:02.499887802 +0000 UTC m=+1026.960675400" lastFinishedPulling="2026-01-21 10:54:10.06729213 +0000 UTC m=+1034.528079728" observedRunningTime="2026-01-21 10:54:11.01462622 +0000 UTC m=+1035.475413818" watchObservedRunningTime="2026-01-21 10:54:11.01976283 +0000 UTC m=+1035.480550428" Jan 21 10:54:11 crc 
kubenswrapper[4745]: I0121 10:54:11.055475 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" podStartSLOduration=3.675368872 podStartE2EDuration="46.055444965s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:28.108846796 +0000 UTC m=+992.569634394" lastFinishedPulling="2026-01-21 10:54:10.488922889 +0000 UTC m=+1034.949710487" observedRunningTime="2026-01-21 10:54:11.055163728 +0000 UTC m=+1035.515951326" watchObservedRunningTime="2026-01-21 10:54:11.055444965 +0000 UTC m=+1035.516232563" Jan 21 10:54:11 crc kubenswrapper[4745]: I0121 10:54:11.108919 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" podStartSLOduration=4.201475443 podStartE2EDuration="46.108887091s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:28.182635839 +0000 UTC m=+992.643423437" lastFinishedPulling="2026-01-21 10:54:10.090047487 +0000 UTC m=+1034.550835085" observedRunningTime="2026-01-21 10:54:11.092957218 +0000 UTC m=+1035.553744816" watchObservedRunningTime="2026-01-21 10:54:11.108887091 +0000 UTC m=+1035.569674689" Jan 21 10:54:11 crc kubenswrapper[4745]: I0121 10:54:11.121483 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" podStartSLOduration=38.412468961 podStartE2EDuration="46.121464221s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:54:02.36502138 +0000 UTC m=+1026.825808968" lastFinishedPulling="2026-01-21 10:54:10.07401663 +0000 UTC m=+1034.534804228" observedRunningTime="2026-01-21 10:54:11.11199736 +0000 UTC m=+1035.572784958" watchObservedRunningTime="2026-01-21 10:54:11.121464221 +0000 UTC m=+1035.582251819" Jan 21 10:54:11 crc kubenswrapper[4745]: I0121 
10:54:11.191681 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" podStartSLOduration=5.0070863 podStartE2EDuration="47.191662212s" podCreationTimestamp="2026-01-21 10:53:24 +0000 UTC" firstStartedPulling="2026-01-21 10:53:27.91119808 +0000 UTC m=+992.371985678" lastFinishedPulling="2026-01-21 10:54:10.095773992 +0000 UTC m=+1034.556561590" observedRunningTime="2026-01-21 10:54:11.184823218 +0000 UTC m=+1035.645610816" watchObservedRunningTime="2026-01-21 10:54:11.191662212 +0000 UTC m=+1035.652449810" Jan 21 10:54:11 crc kubenswrapper[4745]: I0121 10:54:11.260286 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" podStartSLOduration=4.273135402 podStartE2EDuration="46.260269093s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:28.109072052 +0000 UTC m=+992.569859650" lastFinishedPulling="2026-01-21 10:54:10.096205743 +0000 UTC m=+1034.556993341" observedRunningTime="2026-01-21 10:54:11.259564495 +0000 UTC m=+1035.720352093" watchObservedRunningTime="2026-01-21 10:54:11.260269093 +0000 UTC m=+1035.721056691" Jan 21 10:54:11 crc kubenswrapper[4745]: I0121 10:54:11.262105 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" podStartSLOduration=3.755405184 podStartE2EDuration="46.26210014s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:27.567243292 +0000 UTC m=+992.028030890" lastFinishedPulling="2026-01-21 10:54:10.073938248 +0000 UTC m=+1034.534725846" observedRunningTime="2026-01-21 10:54:11.237763302 +0000 UTC m=+1035.698550900" watchObservedRunningTime="2026-01-21 10:54:11.26210014 +0000 UTC m=+1035.722887738" Jan 21 10:54:11 crc kubenswrapper[4745]: I0121 10:54:11.320086 4745 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" podStartSLOduration=4.423598392 podStartE2EDuration="46.320051691s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:28.198607214 +0000 UTC m=+992.659394812" lastFinishedPulling="2026-01-21 10:54:10.095060503 +0000 UTC m=+1034.555848111" observedRunningTime="2026-01-21 10:54:11.291262969 +0000 UTC m=+1035.752050567" watchObservedRunningTime="2026-01-21 10:54:11.320051691 +0000 UTC m=+1035.780839289" Jan 21 10:54:11 crc kubenswrapper[4745]: I0121 10:54:11.990720 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" podStartSLOduration=3.926943707 podStartE2EDuration="46.990702529s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:27.660867298 +0000 UTC m=+992.121654896" lastFinishedPulling="2026-01-21 10:54:10.72462612 +0000 UTC m=+1035.185413718" observedRunningTime="2026-01-21 10:54:11.984265806 +0000 UTC m=+1036.445053404" watchObservedRunningTime="2026-01-21 10:54:11.990702529 +0000 UTC m=+1036.451490127" Jan 21 10:54:12 crc kubenswrapper[4745]: I0121 10:54:12.362442 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:54:12 crc kubenswrapper[4745]: I0121 10:54:12.362889 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:54:12 crc kubenswrapper[4745]: I0121 10:54:12.966085 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" event={"ID":"fb04ba1c-d6a0-40aa-b985-f4715cb11257","Type":"ContainerStarted","Data":"95007eefbfe1a0a16643e419da1d901796c67937cfc90d5bddd01dbe10f5c5ff"} Jan 21 10:54:12 crc 
kubenswrapper[4745]: I0121 10:54:12.966489 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" Jan 21 10:54:13 crc kubenswrapper[4745]: E0121 10:54:13.002008 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" podUID="1efe6d30-3c28-4945-8615-49cafec58641" Jan 21 10:54:13 crc kubenswrapper[4745]: I0121 10:54:13.023770 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" podStartSLOduration=3.524024582 podStartE2EDuration="48.023752535s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:27.595588522 +0000 UTC m=+992.056376120" lastFinishedPulling="2026-01-21 10:54:12.095316475 +0000 UTC m=+1036.556104073" observedRunningTime="2026-01-21 10:54:12.9904684 +0000 UTC m=+1037.451256008" watchObservedRunningTime="2026-01-21 10:54:13.023752535 +0000 UTC m=+1037.484540133" Jan 21 10:54:13 crc kubenswrapper[4745]: I0121 10:54:13.414407 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-wnj4j" podUID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerName="registry-server" probeResult="failure" output=< Jan 21 10:54:13 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 10:54:13 crc kubenswrapper[4745]: > Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.069763 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7ddb5c749-bqhjj" Jan 21 10:54:15 crc 
kubenswrapper[4745]: I0121 10:54:15.082029 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-9b68f5989-qcrlk" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.352867 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.400123 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.453367 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.505111 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.519571 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.611370 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.654325 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.683226 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.801447 4745 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.848397 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.850572 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.868095 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:54:15 crc kubenswrapper[4745]: I0121 10:54:15.868143 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:54:16 crc kubenswrapper[4745]: I0121 10:54:16.139794 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-55db956ddc-j96sf" Jan 21 10:54:16 crc kubenswrapper[4745]: I0121 10:54:16.197060 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-85dd56d4cc-46lz5" Jan 21 10:54:16 crc kubenswrapper[4745]: I0121 10:54:16.197382 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-7cd8bc9dbb-q4ccb" Jan 21 10:54:16 crc kubenswrapper[4745]: I0121 
10:54:16.207214 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-686df47fcb-8v4t6" Jan 21 10:54:16 crc kubenswrapper[4745]: I0121 10:54:16.252289 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-64cd966744-bg5mt" Jan 21 10:54:16 crc kubenswrapper[4745]: I0121 10:54:16.366412 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" Jan 21 10:54:17 crc kubenswrapper[4745]: I0121 10:54:17.705997 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4" Jan 21 10:54:21 crc kubenswrapper[4745]: I0121 10:54:21.127078 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-77c48c7859-4nt9f" Jan 21 10:54:22 crc kubenswrapper[4745]: I0121 10:54:22.455471 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:54:22 crc kubenswrapper[4745]: I0121 10:54:22.501385 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:54:22 crc kubenswrapper[4745]: I0121 10:54:22.705203 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnj4j"] Jan 21 10:54:24 crc kubenswrapper[4745]: I0121 10:54:24.045643 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wnj4j" podUID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerName="registry-server" containerID="cri-o://f9ca87a8672655bdacb44c803d2a930f9e4922f0e81d777b4495cceca1a17f28" gracePeriod=2 Jan 21 10:54:25 crc 
kubenswrapper[4745]: I0121 10:54:25.054091 4745 generic.go:334] "Generic (PLEG): container finished" podID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerID="f9ca87a8672655bdacb44c803d2a930f9e4922f0e81d777b4495cceca1a17f28" exitCode=0 Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.054140 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnj4j" event={"ID":"33408533-2ed3-4fd9-aaf4-e4c832ff7805","Type":"ContainerDied","Data":"f9ca87a8672655bdacb44c803d2a930f9e4922f0e81d777b4495cceca1a17f28"} Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.558482 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.623810 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.818929 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-catalog-content\") pod \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\" (UID: \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.819703 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9wcw\" (UniqueName: \"kubernetes.io/projected/33408533-2ed3-4fd9-aaf4-e4c832ff7805-kube-api-access-b9wcw\") pod \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\" (UID: \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.819755 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-utilities\") pod \"33408533-2ed3-4fd9-aaf4-e4c832ff7805\" (UID: 
\"33408533-2ed3-4fd9-aaf4-e4c832ff7805\") " Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.820723 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-utilities" (OuterVolumeSpecName: "utilities") pod "33408533-2ed3-4fd9-aaf4-e4c832ff7805" (UID: "33408533-2ed3-4fd9-aaf4-e4c832ff7805"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.826977 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33408533-2ed3-4fd9-aaf4-e4c832ff7805-kube-api-access-b9wcw" (OuterVolumeSpecName: "kube-api-access-b9wcw") pod "33408533-2ed3-4fd9-aaf4-e4c832ff7805" (UID: "33408533-2ed3-4fd9-aaf4-e4c832ff7805"). InnerVolumeSpecName "kube-api-access-b9wcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.868779 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33408533-2ed3-4fd9-aaf4-e4c832ff7805" (UID: "33408533-2ed3-4fd9-aaf4-e4c832ff7805"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.920621 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.920661 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33408533-2ed3-4fd9-aaf4-e4c832ff7805-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 10:54:25 crc kubenswrapper[4745]: I0121 10:54:25.920677 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9wcw\" (UniqueName: \"kubernetes.io/projected/33408533-2ed3-4fd9-aaf4-e4c832ff7805-kube-api-access-b9wcw\") on node \"crc\" DevicePath \"\"" Jan 21 10:54:26 crc kubenswrapper[4745]: I0121 10:54:26.064091 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" event={"ID":"1efe6d30-3c28-4945-8615-49cafec58641","Type":"ContainerStarted","Data":"6fb5ce1ead709de33bd407271127390dc06363ebd87809a237bda485ce118260"} Jan 21 10:54:26 crc kubenswrapper[4745]: I0121 10:54:26.068099 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnj4j" event={"ID":"33408533-2ed3-4fd9-aaf4-e4c832ff7805","Type":"ContainerDied","Data":"9ed515580a60c88e4697810e2c65e7b002c2a7ffd3fa424db5f447559dff866f"} Jan 21 10:54:26 crc kubenswrapper[4745]: I0121 10:54:26.068189 4745 scope.go:117] "RemoveContainer" containerID="f9ca87a8672655bdacb44c803d2a930f9e4922f0e81d777b4495cceca1a17f28" Jan 21 10:54:26 crc kubenswrapper[4745]: I0121 10:54:26.068362 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnj4j" Jan 21 10:54:26 crc kubenswrapper[4745]: I0121 10:54:26.092566 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-s8zz8" podStartSLOduration=4.128217105 podStartE2EDuration="1m1.092508658s" podCreationTimestamp="2026-01-21 10:53:25 +0000 UTC" firstStartedPulling="2026-01-21 10:53:28.27921629 +0000 UTC m=+992.740003878" lastFinishedPulling="2026-01-21 10:54:25.243507833 +0000 UTC m=+1049.704295431" observedRunningTime="2026-01-21 10:54:26.083697025 +0000 UTC m=+1050.544484633" watchObservedRunningTime="2026-01-21 10:54:26.092508658 +0000 UTC m=+1050.553296266" Jan 21 10:54:26 crc kubenswrapper[4745]: I0121 10:54:26.108424 4745 scope.go:117] "RemoveContainer" containerID="122bebb0b8584447d562c7ec9cd5216df7603d558759a59d6a95702c62d55b7f" Jan 21 10:54:26 crc kubenswrapper[4745]: I0121 10:54:26.118264 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnj4j"] Jan 21 10:54:26 crc kubenswrapper[4745]: I0121 10:54:26.140283 4745 scope.go:117] "RemoveContainer" containerID="412ddfbe00beb5fb34981a58f0ec770f80f36ba86b559955cf419056b86aa9e5" Jan 21 10:54:26 crc kubenswrapper[4745]: I0121 10:54:26.144089 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wnj4j"] Jan 21 10:54:28 crc kubenswrapper[4745]: I0121 10:54:28.008337 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" path="/var/lib/kubelet/pods/33408533-2ed3-4fd9-aaf4-e4c832ff7805/volumes" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.625386 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gcdhw"] Jan 21 10:54:41 crc kubenswrapper[4745]: E0121 10:54:41.626392 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerName="extract-content" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.626408 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerName="extract-content" Jan 21 10:54:41 crc kubenswrapper[4745]: E0121 10:54:41.626421 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerName="registry-server" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.626430 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerName="registry-server" Jan 21 10:54:41 crc kubenswrapper[4745]: E0121 10:54:41.626447 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerName="extract-utilities" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.626456 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerName="extract-utilities" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.626651 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="33408533-2ed3-4fd9-aaf4-e4c832ff7805" containerName="registry-server" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.629239 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.632259 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.632750 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-r4ptd" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.632867 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.637959 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.654469 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gcdhw"] Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.672118 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hzfr4"] Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.673193 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.685304 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.710338 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hzfr4"] Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.791752 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x56np\" (UniqueName: \"kubernetes.io/projected/092d2c26-9a6c-4402-99d6-a8cd70a198dc-kube-api-access-x56np\") pod \"dnsmasq-dns-78dd6ddcc-hzfr4\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.792167 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv9g2\" (UniqueName: \"kubernetes.io/projected/ec267dae-af54-4295-a5a2-4dd05b1369fc-kube-api-access-tv9g2\") pod \"dnsmasq-dns-675f4bcbfc-gcdhw\" (UID: \"ec267dae-af54-4295-a5a2-4dd05b1369fc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.792312 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-config\") pod \"dnsmasq-dns-78dd6ddcc-hzfr4\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.792422 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hzfr4\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.792580 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec267dae-af54-4295-a5a2-4dd05b1369fc-config\") pod \"dnsmasq-dns-675f4bcbfc-gcdhw\" (UID: \"ec267dae-af54-4295-a5a2-4dd05b1369fc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.894149 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec267dae-af54-4295-a5a2-4dd05b1369fc-config\") pod \"dnsmasq-dns-675f4bcbfc-gcdhw\" (UID: \"ec267dae-af54-4295-a5a2-4dd05b1369fc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.894207 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x56np\" (UniqueName: \"kubernetes.io/projected/092d2c26-9a6c-4402-99d6-a8cd70a198dc-kube-api-access-x56np\") pod \"dnsmasq-dns-78dd6ddcc-hzfr4\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.894237 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv9g2\" (UniqueName: \"kubernetes.io/projected/ec267dae-af54-4295-a5a2-4dd05b1369fc-kube-api-access-tv9g2\") pod \"dnsmasq-dns-675f4bcbfc-gcdhw\" (UID: \"ec267dae-af54-4295-a5a2-4dd05b1369fc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.894267 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-config\") pod \"dnsmasq-dns-78dd6ddcc-hzfr4\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:54:41 
crc kubenswrapper[4745]: I0121 10:54:41.894303 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hzfr4\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.895316 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-hzfr4\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.895320 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec267dae-af54-4295-a5a2-4dd05b1369fc-config\") pod \"dnsmasq-dns-675f4bcbfc-gcdhw\" (UID: \"ec267dae-af54-4295-a5a2-4dd05b1369fc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.895732 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-config\") pod \"dnsmasq-dns-78dd6ddcc-hzfr4\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.921735 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv9g2\" (UniqueName: \"kubernetes.io/projected/ec267dae-af54-4295-a5a2-4dd05b1369fc-kube-api-access-tv9g2\") pod \"dnsmasq-dns-675f4bcbfc-gcdhw\" (UID: \"ec267dae-af54-4295-a5a2-4dd05b1369fc\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.934621 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-x56np\" (UniqueName: \"kubernetes.io/projected/092d2c26-9a6c-4402-99d6-a8cd70a198dc-kube-api-access-x56np\") pod \"dnsmasq-dns-78dd6ddcc-hzfr4\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:54:41 crc kubenswrapper[4745]: I0121 10:54:41.960362 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" Jan 21 10:54:42 crc kubenswrapper[4745]: I0121 10:54:42.002230 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:54:42 crc kubenswrapper[4745]: I0121 10:54:42.515007 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gcdhw"] Jan 21 10:54:42 crc kubenswrapper[4745]: W0121 10:54:42.519793 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec267dae_af54_4295_a5a2_4dd05b1369fc.slice/crio-83052c34587cd32f4e8c13e40d64455507487c54fca6a9ee4cd8ed50401d85f6 WatchSource:0}: Error finding container 83052c34587cd32f4e8c13e40d64455507487c54fca6a9ee4cd8ed50401d85f6: Status 404 returned error can't find the container with id 83052c34587cd32f4e8c13e40d64455507487c54fca6a9ee4cd8ed50401d85f6 Jan 21 10:54:42 crc kubenswrapper[4745]: I0121 10:54:42.543522 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hzfr4"] Jan 21 10:54:42 crc kubenswrapper[4745]: W0121 10:54:42.547472 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod092d2c26_9a6c_4402_99d6_a8cd70a198dc.slice/crio-4106e8dfe15189153482fcfdd2ddd0ff99a192565e7bed3dbfddafbe4d26e3fd WatchSource:0}: Error finding container 4106e8dfe15189153482fcfdd2ddd0ff99a192565e7bed3dbfddafbe4d26e3fd: Status 404 returned error can't find the container with id 
4106e8dfe15189153482fcfdd2ddd0ff99a192565e7bed3dbfddafbe4d26e3fd Jan 21 10:54:43 crc kubenswrapper[4745]: I0121 10:54:43.197992 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" event={"ID":"092d2c26-9a6c-4402-99d6-a8cd70a198dc","Type":"ContainerStarted","Data":"4106e8dfe15189153482fcfdd2ddd0ff99a192565e7bed3dbfddafbe4d26e3fd"} Jan 21 10:54:43 crc kubenswrapper[4745]: I0121 10:54:43.199336 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" event={"ID":"ec267dae-af54-4295-a5a2-4dd05b1369fc","Type":"ContainerStarted","Data":"83052c34587cd32f4e8c13e40d64455507487c54fca6a9ee4cd8ed50401d85f6"} Jan 21 10:54:44 crc kubenswrapper[4745]: I0121 10:54:44.757842 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gcdhw"] Jan 21 10:54:44 crc kubenswrapper[4745]: I0121 10:54:44.847342 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncqr"] Jan 21 10:54:44 crc kubenswrapper[4745]: I0121 10:54:44.851938 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:54:44 crc kubenswrapper[4745]: I0121 10:54:44.903607 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncqr"] Jan 21 10:54:44 crc kubenswrapper[4745]: I0121 10:54:44.956271 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-config\") pod \"dnsmasq-dns-666b6646f7-gncqr\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:54:44 crc kubenswrapper[4745]: I0121 10:54:44.956319 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cjcn\" (UniqueName: \"kubernetes.io/projected/6d5fae24-b6a4-48e8-b83a-beee522c1a26-kube-api-access-5cjcn\") pod \"dnsmasq-dns-666b6646f7-gncqr\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:54:44 crc kubenswrapper[4745]: I0121 10:54:44.956419 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gncqr\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.060360 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gncqr\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.060418 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-config\") pod \"dnsmasq-dns-666b6646f7-gncqr\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.060442 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cjcn\" (UniqueName: \"kubernetes.io/projected/6d5fae24-b6a4-48e8-b83a-beee522c1a26-kube-api-access-5cjcn\") pod \"dnsmasq-dns-666b6646f7-gncqr\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.061481 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-dns-svc\") pod \"dnsmasq-dns-666b6646f7-gncqr\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.062774 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-config\") pod \"dnsmasq-dns-666b6646f7-gncqr\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.097714 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cjcn\" (UniqueName: \"kubernetes.io/projected/6d5fae24-b6a4-48e8-b83a-beee522c1a26-kube-api-access-5cjcn\") pod \"dnsmasq-dns-666b6646f7-gncqr\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.176772 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.281301 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hzfr4"] Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.313376 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wrpzl"] Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.314918 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.357266 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wrpzl"] Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.367058 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-config\") pod \"dnsmasq-dns-57d769cc4f-wrpzl\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.367144 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-wrpzl\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.367226 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vjkf\" (UniqueName: \"kubernetes.io/projected/b446ccbb-1565-4fcd-821b-bf826666bc07-kube-api-access-5vjkf\") pod \"dnsmasq-dns-57d769cc4f-wrpzl\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 
10:54:45.468103 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-wrpzl\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.468187 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vjkf\" (UniqueName: \"kubernetes.io/projected/b446ccbb-1565-4fcd-821b-bf826666bc07-kube-api-access-5vjkf\") pod \"dnsmasq-dns-57d769cc4f-wrpzl\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.468219 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-config\") pod \"dnsmasq-dns-57d769cc4f-wrpzl\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.469036 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-config\") pod \"dnsmasq-dns-57d769cc4f-wrpzl\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.469580 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-wrpzl\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.497094 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vjkf\" 
(UniqueName: \"kubernetes.io/projected/b446ccbb-1565-4fcd-821b-bf826666bc07-kube-api-access-5vjkf\") pod \"dnsmasq-dns-57d769cc4f-wrpzl\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.653130 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.866991 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.867305 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.867375 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.870084 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a809b13ad0c1d2cb669d0700f6bab3b22eddc9ebef1f9677d885d8d6e5615f59"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 10:54:45 crc kubenswrapper[4745]: I0121 10:54:45.870186 4745 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://a809b13ad0c1d2cb669d0700f6bab3b22eddc9ebef1f9677d885d8d6e5615f59" gracePeriod=600 Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.051683 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncqr"] Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.089975 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.091149 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.105410 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.120014 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.120210 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.120328 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.120576 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nlk4f" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.120681 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.126443 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.144915 4745 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.203707 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.203754 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.203782 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.203796 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4af3b414-a820-42a8-89c4-f9cade535b01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.203813 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 
crc kubenswrapper[4745]: I0121 10:54:46.203856 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.203881 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-config-data\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.203910 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.203927 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbv7r\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-kube-api-access-dbv7r\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.203948 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4af3b414-a820-42a8-89c4-f9cade535b01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.203968 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.305457 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.305776 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.305803 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.305819 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4af3b414-a820-42a8-89c4-f9cade535b01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.305835 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.305876 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.305905 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-config-data\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.305933 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.305952 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbv7r\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-kube-api-access-dbv7r\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.305972 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4af3b414-a820-42a8-89c4-f9cade535b01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: 
\"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.305991 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.306562 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.306559 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.307164 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.307399 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-config-data\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.307421 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.307875 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.324336 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4af3b414-a820-42a8-89c4-f9cade535b01-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.325376 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4af3b414-a820-42a8-89c4-f9cade535b01-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.327157 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gncqr" event={"ID":"6d5fae24-b6a4-48e8-b83a-beee522c1a26","Type":"ContainerStarted","Data":"fa380c76a650cdc010dfa42985b585bb69823cd7a1f490772926fdbb4f26cbcb"} Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.327260 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " 
pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.329731 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbv7r\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-kube-api-access-dbv7r\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.347350 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.355375 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="a809b13ad0c1d2cb669d0700f6bab3b22eddc9ebef1f9677d885d8d6e5615f59" exitCode=0 Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.355419 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"a809b13ad0c1d2cb669d0700f6bab3b22eddc9ebef1f9677d885d8d6e5615f59"} Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.355452 4745 scope.go:117] "RemoveContainer" containerID="5b1c6cf55f7b7acda4bdbdb072152cc988d22c5663c32b750b1831934e03f8b3" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.362461 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.495657 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.503687 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.515323 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.518765 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.518881 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.521455 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.527617 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rsjwr" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.527709 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.528708 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.528775 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.529066 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.594244 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wrpzl"] Jan 21 10:54:46 crc kubenswrapper[4745]: W0121 10:54:46.624848 
4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb446ccbb_1565_4fcd_821b_bf826666bc07.slice/crio-87d448a3256ff1132c83152f7ae61773c3e569bfeaa913c4bb463f08d9b80ae6 WatchSource:0}: Error finding container 87d448a3256ff1132c83152f7ae61773c3e569bfeaa913c4bb463f08d9b80ae6: Status 404 returned error can't find the container with id 87d448a3256ff1132c83152f7ae61773c3e569bfeaa913c4bb463f08d9b80ae6 Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.626377 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.626437 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.626472 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/557c4211-e324-49a4-8493-6685e4f5bee8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.626570 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.626617 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.626703 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.626771 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.626849 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.626950 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/557c4211-e324-49a4-8493-6685e4f5bee8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.627034 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.627140 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6v2t\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-kube-api-access-h6v2t\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.729267 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.729347 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.729376 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/557c4211-e324-49a4-8493-6685e4f5bee8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.729415 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.729439 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.729462 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.729494 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.729545 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.729567 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/557c4211-e324-49a4-8493-6685e4f5bee8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.729595 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.729689 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6v2t\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-kube-api-access-h6v2t\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.730745 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.731090 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.732430 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.732810 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.739841 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.842185 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.846427 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.846435 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/557c4211-e324-49a4-8493-6685e4f5bee8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.848101 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.849132 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.849773 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/557c4211-e324-49a4-8493-6685e4f5bee8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.854865 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6v2t\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-kube-api-access-h6v2t\") pod \"rabbitmq-cell1-server-0\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:46 crc kubenswrapper[4745]: I0121 10:54:46.870640 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.187390 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.198744 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.200561 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.205236 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.206152 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.206164 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-cps77" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.206355 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.219163 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.253666 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.368280 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.368368 
4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.368417 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.368490 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-kolla-config\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.368564 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65pxm\" (UniqueName: \"kubernetes.io/projected/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-kube-api-access-65pxm\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.368619 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-config-data-default\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.368698 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.368734 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.374613 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4af3b414-a820-42a8-89c4-f9cade535b01","Type":"ContainerStarted","Data":"9d42dc478c293ac8e0e9475025367014a9e2a6046c60868f5e366c9f4c4d788d"} Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.375523 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" event={"ID":"b446ccbb-1565-4fcd-821b-bf826666bc07","Type":"ContainerStarted","Data":"87d448a3256ff1132c83152f7ae61773c3e569bfeaa913c4bb463f08d9b80ae6"} Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.392504 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"21f1327bc2ef040b6fb6ac8d74d92c5bf542264cab55a4f20977c7ed934dca6b"} Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.469663 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-kolla-config\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " 
pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.470122 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65pxm\" (UniqueName: \"kubernetes.io/projected/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-kube-api-access-65pxm\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.470171 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-config-data-default\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.470221 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.470246 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.470271 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.470292 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.470315 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.470778 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.470811 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-kolla-config\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.470874 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.472802 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.473273 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-config-data-default\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.480049 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.480156 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.495219 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65pxm\" (UniqueName: \"kubernetes.io/projected/c2b5df3e-a44d-42ff-96a4-2bfd32db45bf-kube-api-access-65pxm\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.556618 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf\") " 
pod="openstack/openstack-galera-0" Jan 21 10:54:47 crc kubenswrapper[4745]: I0121 10:54:47.611693 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.046307 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.445078 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c4211-e324-49a4-8493-6685e4f5bee8","Type":"ContainerStarted","Data":"760432a3e6bc6dd9fa463c641ce89dc33218dc7c8537b9862a6a1ed30c0bba05"} Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.698351 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.869485 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.874869 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.879170 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.879771 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-m579q" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.884299 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.901148 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.937652 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.939178 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.954685 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.954866 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xqmj4" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.955009 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.955150 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.958663 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9253af27-9c32-4977-9632-266bb434fd18-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.958713 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppkl6\" (UniqueName: \"kubernetes.io/projected/9253af27-9c32-4977-9632-266bb434fd18-kube-api-access-ppkl6\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.958741 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9253af27-9c32-4977-9632-266bb434fd18-config-data\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.958759 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9253af27-9c32-4977-9632-266bb434fd18-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:48 crc kubenswrapper[4745]: I0121 10:54:48.958822 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9253af27-9c32-4977-9632-266bb434fd18-kolla-config\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.007332 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.060682 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-vw4l5\" (UniqueName: \"kubernetes.io/projected/0dd4138e-532c-446d-84ba-6bf954dfbd03-kube-api-access-vw4l5\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.060760 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd4138e-532c-446d-84ba-6bf954dfbd03-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.060795 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0dd4138e-532c-446d-84ba-6bf954dfbd03-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.060885 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9253af27-9c32-4977-9632-266bb434fd18-kolla-config\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.060958 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.060991 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9253af27-9c32-4977-9632-266bb434fd18-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.061020 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0dd4138e-532c-446d-84ba-6bf954dfbd03-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.061050 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd4138e-532c-446d-84ba-6bf954dfbd03-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.061097 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppkl6\" (UniqueName: \"kubernetes.io/projected/9253af27-9c32-4977-9632-266bb434fd18-kube-api-access-ppkl6\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.061112 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd4138e-532c-446d-84ba-6bf954dfbd03-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.061159 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/9253af27-9c32-4977-9632-266bb434fd18-config-data\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.061178 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9253af27-9c32-4977-9632-266bb434fd18-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.061245 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0dd4138e-532c-446d-84ba-6bf954dfbd03-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.062967 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9253af27-9c32-4977-9632-266bb434fd18-config-data\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.067429 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9253af27-9c32-4977-9632-266bb434fd18-kolla-config\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.075422 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9253af27-9c32-4977-9632-266bb434fd18-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:49 
crc kubenswrapper[4745]: I0121 10:54:49.082582 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9253af27-9c32-4977-9632-266bb434fd18-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.086629 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppkl6\" (UniqueName: \"kubernetes.io/projected/9253af27-9c32-4977-9632-266bb434fd18-kube-api-access-ppkl6\") pod \"memcached-0\" (UID: \"9253af27-9c32-4977-9632-266bb434fd18\") " pod="openstack/memcached-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.167560 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0dd4138e-532c-446d-84ba-6bf954dfbd03-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.170245 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd4138e-532c-446d-84ba-6bf954dfbd03-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.170306 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd4138e-532c-446d-84ba-6bf954dfbd03-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.170460 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0dd4138e-532c-446d-84ba-6bf954dfbd03-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.170501 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw4l5\" (UniqueName: \"kubernetes.io/projected/0dd4138e-532c-446d-84ba-6bf954dfbd03-kube-api-access-vw4l5\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.170579 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd4138e-532c-446d-84ba-6bf954dfbd03-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.170607 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0dd4138e-532c-446d-84ba-6bf954dfbd03-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.170708 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.171068 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.168970 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0dd4138e-532c-446d-84ba-6bf954dfbd03-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.188073 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0dd4138e-532c-446d-84ba-6bf954dfbd03-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.193035 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0dd4138e-532c-446d-84ba-6bf954dfbd03-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.205836 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0dd4138e-532c-446d-84ba-6bf954dfbd03-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.222302 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dd4138e-532c-446d-84ba-6bf954dfbd03-combined-ca-bundle\") pod 
\"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.222956 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.238671 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw4l5\" (UniqueName: \"kubernetes.io/projected/0dd4138e-532c-446d-84ba-6bf954dfbd03-kube-api-access-vw4l5\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.239637 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dd4138e-532c-446d-84ba-6bf954dfbd03-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.268158 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"0dd4138e-532c-446d-84ba-6bf954dfbd03\") " pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.293349 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 21 10:54:49 crc kubenswrapper[4745]: I0121 10:54:49.600464 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf","Type":"ContainerStarted","Data":"d804871a61ee52ae03587bbd27a830d2f7507db584c2e9fc1914efdfbee50efa"} Jan 21 10:54:50 crc kubenswrapper[4745]: I0121 10:54:50.162974 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 21 10:54:50 crc kubenswrapper[4745]: I0121 10:54:50.175742 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 21 10:54:50 crc kubenswrapper[4745]: I0121 10:54:50.627081 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9253af27-9c32-4977-9632-266bb434fd18","Type":"ContainerStarted","Data":"6a0e4763a16155fc33f7b6d6402a1dee4ff8fdf5246bb7d8e4ed84d04fd7d3fe"} Jan 21 10:54:50 crc kubenswrapper[4745]: I0121 10:54:50.675412 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0dd4138e-532c-446d-84ba-6bf954dfbd03","Type":"ContainerStarted","Data":"885a0f71899bf2e6d4b3277d37d531cf2e092f59ccaecbcf39c2ed98348aaef9"} Jan 21 10:54:51 crc kubenswrapper[4745]: I0121 10:54:51.877042 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 10:54:51 crc kubenswrapper[4745]: I0121 10:54:51.878459 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 10:54:51 crc kubenswrapper[4745]: I0121 10:54:51.883825 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-566nj" Jan 21 10:54:51 crc kubenswrapper[4745]: I0121 10:54:51.916627 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 10:54:51 crc kubenswrapper[4745]: I0121 10:54:51.961133 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l67wg\" (UniqueName: \"kubernetes.io/projected/6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd-kube-api-access-l67wg\") pod \"kube-state-metrics-0\" (UID: \"6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd\") " pod="openstack/kube-state-metrics-0" Jan 21 10:54:52 crc kubenswrapper[4745]: I0121 10:54:52.064744 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l67wg\" (UniqueName: \"kubernetes.io/projected/6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd-kube-api-access-l67wg\") pod \"kube-state-metrics-0\" (UID: \"6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd\") " pod="openstack/kube-state-metrics-0" Jan 21 10:54:52 crc kubenswrapper[4745]: I0121 10:54:52.094170 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l67wg\" (UniqueName: \"kubernetes.io/projected/6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd-kube-api-access-l67wg\") pod \"kube-state-metrics-0\" (UID: \"6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd\") " pod="openstack/kube-state-metrics-0" Jan 21 10:54:52 crc kubenswrapper[4745]: I0121 10:54:52.221223 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.069230 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-t8gd4"] Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.074800 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.087270 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.087274 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-cxlp2" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.091109 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.109212 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-xs6fp"] Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.110904 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129042 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/113ad23b-2a19-4cef-a99b-7b61d3e0779f-scripts\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129085 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113ad23b-2a19-4cef-a99b-7b61d3e0779f-combined-ca-bundle\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129105 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/113ad23b-2a19-4cef-a99b-7b61d3e0779f-var-run-ovn\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129124 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/113ad23b-2a19-4cef-a99b-7b61d3e0779f-var-run\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129147 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50f6a02e-ecd9-48c9-8332-806fda00af43-scripts\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 
10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129174 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-var-run\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129191 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/113ad23b-2a19-4cef-a99b-7b61d3e0779f-ovn-controller-tls-certs\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129223 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-var-lib\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129238 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79l5s\" (UniqueName: \"kubernetes.io/projected/113ad23b-2a19-4cef-a99b-7b61d3e0779f-kube-api-access-79l5s\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129259 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjbv\" (UniqueName: \"kubernetes.io/projected/50f6a02e-ecd9-48c9-8332-806fda00af43-kube-api-access-prjbv\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 
21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129325 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-etc-ovs\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129375 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/113ad23b-2a19-4cef-a99b-7b61d3e0779f-var-log-ovn\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.129425 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-var-log\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.153847 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-t8gd4"] Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.172901 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xs6fp"] Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.230973 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-var-lib\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.231051 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-79l5s\" (UniqueName: \"kubernetes.io/projected/113ad23b-2a19-4cef-a99b-7b61d3e0779f-kube-api-access-79l5s\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.231087 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prjbv\" (UniqueName: \"kubernetes.io/projected/50f6a02e-ecd9-48c9-8332-806fda00af43-kube-api-access-prjbv\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.231119 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-etc-ovs\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.231143 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/113ad23b-2a19-4cef-a99b-7b61d3e0779f-var-log-ovn\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.231164 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-var-log\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.231234 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/113ad23b-2a19-4cef-a99b-7b61d3e0779f-scripts\") pod 
\"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.231262 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113ad23b-2a19-4cef-a99b-7b61d3e0779f-combined-ca-bundle\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.231294 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/113ad23b-2a19-4cef-a99b-7b61d3e0779f-var-run-ovn\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.231329 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/113ad23b-2a19-4cef-a99b-7b61d3e0779f-var-run\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.231361 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50f6a02e-ecd9-48c9-8332-806fda00af43-scripts\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.231396 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-var-run\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 
10:54:54.231448 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/113ad23b-2a19-4cef-a99b-7b61d3e0779f-ovn-controller-tls-certs\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.240787 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-var-lib\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.241982 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-etc-ovs\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.242191 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/113ad23b-2a19-4cef-a99b-7b61d3e0779f-var-log-ovn\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.242195 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.242269 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-var-log\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.242380 
4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/113ad23b-2a19-4cef-a99b-7b61d3e0779f-var-run\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.242499 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/113ad23b-2a19-4cef-a99b-7b61d3e0779f-var-run-ovn\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.246038 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/113ad23b-2a19-4cef-a99b-7b61d3e0779f-scripts\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.246512 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/50f6a02e-ecd9-48c9-8332-806fda00af43-var-run\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.248781 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50f6a02e-ecd9-48c9-8332-806fda00af43-scripts\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.264174 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/113ad23b-2a19-4cef-a99b-7b61d3e0779f-combined-ca-bundle\") pod \"ovn-controller-t8gd4\" 
(UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.264208 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/113ad23b-2a19-4cef-a99b-7b61d3e0779f-ovn-controller-tls-certs\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.268786 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prjbv\" (UniqueName: \"kubernetes.io/projected/50f6a02e-ecd9-48c9-8332-806fda00af43-kube-api-access-prjbv\") pod \"ovn-controller-ovs-xs6fp\" (UID: \"50f6a02e-ecd9-48c9-8332-806fda00af43\") " pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.277205 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79l5s\" (UniqueName: \"kubernetes.io/projected/113ad23b-2a19-4cef-a99b-7b61d3e0779f-kube-api-access-79l5s\") pod \"ovn-controller-t8gd4\" (UID: \"113ad23b-2a19-4cef-a99b-7b61d3e0779f\") " pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: W0121 10:54:54.279088 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d9b85f9_734f_4948_8e8b_ad1a45e5c2fd.slice/crio-d56b164aa52512e4a8781d210d74c2d1dec792343c46f816383d69656163a892 WatchSource:0}: Error finding container d56b164aa52512e4a8781d210d74c2d1dec792343c46f816383d69656163a892: Status 404 returned error can't find the container with id d56b164aa52512e4a8781d210d74c2d1dec792343c46f816383d69656163a892 Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.412660 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-t8gd4" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.441487 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:54:54 crc kubenswrapper[4745]: I0121 10:54:54.826723 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd","Type":"ContainerStarted","Data":"d56b164aa52512e4a8781d210d74c2d1dec792343c46f816383d69656163a892"} Jan 21 10:54:55 crc kubenswrapper[4745]: I0121 10:54:55.532669 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-t8gd4"] Jan 21 10:54:55 crc kubenswrapper[4745]: W0121 10:54:55.562799 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod113ad23b_2a19_4cef_a99b_7b61d3e0779f.slice/crio-eb2ea83aee1edb3123377009beb37d429e6796eeb98856a0e217b7681baebe88 WatchSource:0}: Error finding container eb2ea83aee1edb3123377009beb37d429e6796eeb98856a0e217b7681baebe88: Status 404 returned error can't find the container with id eb2ea83aee1edb3123377009beb37d429e6796eeb98856a0e217b7681baebe88 Jan 21 10:54:55 crc kubenswrapper[4745]: I0121 10:54:55.853256 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-t8gd4" event={"ID":"113ad23b-2a19-4cef-a99b-7b61d3e0779f","Type":"ContainerStarted","Data":"eb2ea83aee1edb3123377009beb37d429e6796eeb98856a0e217b7681baebe88"} Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.509634 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-xs6fp"] Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.767092 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-q4h6w"] Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.770806 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.774818 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.775921 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.777117 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q4h6w"] Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.912260 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a7d57467-feff-4abf-b152-11fe4647f21d-ovn-rundir\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.912316 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7d57467-feff-4abf-b152-11fe4647f21d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.912357 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7d57467-feff-4abf-b152-11fe4647f21d-config\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.912423 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zmrq\" (UniqueName: 
\"kubernetes.io/projected/a7d57467-feff-4abf-b152-11fe4647f21d-kube-api-access-4zmrq\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.912448 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a7d57467-feff-4abf-b152-11fe4647f21d-ovs-rundir\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:56 crc kubenswrapper[4745]: I0121 10:54:56.912467 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d57467-feff-4abf-b152-11fe4647f21d-combined-ca-bundle\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.013763 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zmrq\" (UniqueName: \"kubernetes.io/projected/a7d57467-feff-4abf-b152-11fe4647f21d-kube-api-access-4zmrq\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.013804 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a7d57467-feff-4abf-b152-11fe4647f21d-ovs-rundir\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.013833 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a7d57467-feff-4abf-b152-11fe4647f21d-combined-ca-bundle\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.013897 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a7d57467-feff-4abf-b152-11fe4647f21d-ovn-rundir\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.013944 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7d57467-feff-4abf-b152-11fe4647f21d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.014124 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/a7d57467-feff-4abf-b152-11fe4647f21d-ovs-rundir\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.014189 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/a7d57467-feff-4abf-b152-11fe4647f21d-ovn-rundir\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.014241 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7d57467-feff-4abf-b152-11fe4647f21d-config\") pod 
\"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.016291 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7d57467-feff-4abf-b152-11fe4647f21d-config\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.038002 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7d57467-feff-4abf-b152-11fe4647f21d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.044375 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zmrq\" (UniqueName: \"kubernetes.io/projected/a7d57467-feff-4abf-b152-11fe4647f21d-kube-api-access-4zmrq\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.051980 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7d57467-feff-4abf-b152-11fe4647f21d-combined-ca-bundle\") pod \"ovn-controller-metrics-q4h6w\" (UID: \"a7d57467-feff-4abf-b152-11fe4647f21d\") " pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:57 crc kubenswrapper[4745]: I0121 10:54:57.113904 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-q4h6w" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.404786 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.406403 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.415957 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.416206 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.416408 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-zhqgc" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.417494 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.417684 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.584454 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19556673-788b-4132-97fa-616a25a67fad-config\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.584583 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc 
kubenswrapper[4745]: I0121 10:54:58.584933 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19556673-788b-4132-97fa-616a25a67fad-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.585120 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19556673-788b-4132-97fa-616a25a67fad-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.585206 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19556673-788b-4132-97fa-616a25a67fad-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.585266 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19556673-788b-4132-97fa-616a25a67fad-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.585441 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnc2k\" (UniqueName: \"kubernetes.io/projected/19556673-788b-4132-97fa-616a25a67fad-kube-api-access-dnc2k\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.585518 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19556673-788b-4132-97fa-616a25a67fad-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.610888 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.612386 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.615376 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.615433 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.615586 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.616362 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-96x6z" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.628950 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.687497 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnc2k\" (UniqueName: \"kubernetes.io/projected/19556673-788b-4132-97fa-616a25a67fad-kube-api-access-dnc2k\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.687634 4745 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19556673-788b-4132-97fa-616a25a67fad-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.687899 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19556673-788b-4132-97fa-616a25a67fad-config\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.687936 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.687975 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19556673-788b-4132-97fa-616a25a67fad-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.688044 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19556673-788b-4132-97fa-616a25a67fad-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.688072 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19556673-788b-4132-97fa-616a25a67fad-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: 
\"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.688096 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19556673-788b-4132-97fa-616a25a67fad-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.689352 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/19556673-788b-4132-97fa-616a25a67fad-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.690735 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.691512 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/19556673-788b-4132-97fa-616a25a67fad-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.692979 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19556673-788b-4132-97fa-616a25a67fad-config\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.697050 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/19556673-788b-4132-97fa-616a25a67fad-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.704858 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19556673-788b-4132-97fa-616a25a67fad-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.767419 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnc2k\" (UniqueName: \"kubernetes.io/projected/19556673-788b-4132-97fa-616a25a67fad-kube-api-access-dnc2k\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.769039 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/19556673-788b-4132-97fa-616a25a67fad-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.790933 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"19556673-788b-4132-97fa-616a25a67fad\") " pod="openstack/ovsdbserver-nb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.790619 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b78737fd-60ce-47e2-bfa8-92241cd4a475-config\") pod \"ovsdbserver-sb-0\" (UID: 
\"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.792348 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b78737fd-60ce-47e2-bfa8-92241cd4a475-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.792389 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78737fd-60ce-47e2-bfa8-92241cd4a475-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.792416 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b78737fd-60ce-47e2-bfa8-92241cd4a475-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.792469 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b78737fd-60ce-47e2-bfa8-92241cd4a475-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.792531 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b78737fd-60ce-47e2-bfa8-92241cd4a475-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " 
pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.792612 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qvpj\" (UniqueName: \"kubernetes.io/projected/b78737fd-60ce-47e2-bfa8-92241cd4a475-kube-api-access-9qvpj\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.792693 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.896652 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.896788 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b78737fd-60ce-47e2-bfa8-92241cd4a475-config\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.896850 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b78737fd-60ce-47e2-bfa8-92241cd4a475-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.896898 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78737fd-60ce-47e2-bfa8-92241cd4a475-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.896919 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b78737fd-60ce-47e2-bfa8-92241cd4a475-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.896969 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b78737fd-60ce-47e2-bfa8-92241cd4a475-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.897005 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.897745 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b78737fd-60ce-47e2-bfa8-92241cd4a475-config\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.897992 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/b78737fd-60ce-47e2-bfa8-92241cd4a475-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " 
pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.898731 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b78737fd-60ce-47e2-bfa8-92241cd4a475-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.899223 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b78737fd-60ce-47e2-bfa8-92241cd4a475-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.899292 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qvpj\" (UniqueName: \"kubernetes.io/projected/b78737fd-60ce-47e2-bfa8-92241cd4a475-kube-api-access-9qvpj\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.903824 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b78737fd-60ce-47e2-bfa8-92241cd4a475-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.910685 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/b78737fd-60ce-47e2-bfa8-92241cd4a475-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.923458 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-9qvpj\" (UniqueName: \"kubernetes.io/projected/b78737fd-60ce-47e2-bfa8-92241cd4a475-kube-api-access-9qvpj\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.929617 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:58 crc kubenswrapper[4745]: I0121 10:54:58.934410 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b78737fd-60ce-47e2-bfa8-92241cd4a475-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"b78737fd-60ce-47e2-bfa8-92241cd4a475\") " pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:59 crc kubenswrapper[4745]: I0121 10:54:59.007753 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 21 10:54:59 crc kubenswrapper[4745]: I0121 10:54:59.043179 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 21 10:55:08 crc kubenswrapper[4745]: I0121 10:55:08.009894 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xs6fp" event={"ID":"50f6a02e-ecd9-48c9-8332-806fda00af43","Type":"ContainerStarted","Data":"33e6c5e31579ec86e8eaf95e68947f71bd96b97711e57c9f0d057315ab841fab"} Jan 21 10:55:16 crc kubenswrapper[4745]: E0121 10:55:16.702264 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Jan 21 10:55:16 crc kubenswrapper[4745]: E0121 10:55:16.703013 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n664h55ch689h58h64dh5b9h55dh7dh77h695h5d8h64bh58ch545h598hcfh66ch677h55bh5f6h579h648hd8h67bh6bh88h67ch557h648h58ch55fh58cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRead
Only:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppkl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(9253af27-9c32-4977-9632-266bb434fd18): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Jan 21 10:55:16 crc kubenswrapper[4745]: E0121 10:55:16.704226 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="9253af27-9c32-4977-9632-266bb434fd18" Jan 21 10:55:17 crc kubenswrapper[4745]: E0121 10:55:17.071302 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="9253af27-9c32-4977-9632-266bb434fd18" Jan 21 10:55:18 crc kubenswrapper[4745]: E0121 10:55:18.024255 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 21 10:55:18 crc kubenswrapper[4745]: E0121 10:55:18.024872 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6v2t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(557c4211-e324-49a4-8493-6685e4f5bee8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:55:18 crc 
kubenswrapper[4745]: E0121 10:55:18.026110 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="557c4211-e324-49a4-8493-6685e4f5bee8" Jan 21 10:55:18 crc kubenswrapper[4745]: E0121 10:55:18.034440 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Jan 21 10:55:18 crc kubenswrapper[4745]: E0121 10:55:18.034829 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dbv7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(4af3b414-a820-42a8-89c4-f9cade535b01): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:55:18 crc 
kubenswrapper[4745]: E0121 10:55:18.035999 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="4af3b414-a820-42a8-89c4-f9cade535b01" Jan 21 10:55:18 crc kubenswrapper[4745]: E0121 10:55:18.084308 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="4af3b414-a820-42a8-89c4-f9cade535b01" Jan 21 10:55:18 crc kubenswrapper[4745]: E0121 10:55:18.085033 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="557c4211-e324-49a4-8493-6685e4f5bee8" Jan 21 10:55:24 crc kubenswrapper[4745]: I0121 10:55:24.310682 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.427747 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.428270 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts 
--domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5vjkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-wrpzl_openstack(b446ccbb-1565-4fcd-821b-bf826666bc07): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 
10:55:25.430488 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" podUID="b446ccbb-1565-4fcd-821b-bf826666bc07" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.468511 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.468684 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tv9g2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-gcdhw_openstack(ec267dae-af54-4295-a5a2-4dd05b1369fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.469999 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" podUID="ec267dae-af54-4295-a5a2-4dd05b1369fc" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.483685 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.484232 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5cjcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullP
olicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-gncqr_openstack(6d5fae24-b6a4-48e8-b83a-beee522c1a26): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.485392 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-gncqr" podUID="6d5fae24-b6a4-48e8-b83a-beee522c1a26" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.492977 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.493146 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x56np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-hzfr4_openstack(092d2c26-9a6c-4402-99d6-a8cd70a198dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.494516 4745 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" podUID="092d2c26-9a6c-4402-99d6-a8cd70a198dc" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.929632 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.929951 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n54chc6h5d8h547hdbhbch648h54h55bh55dh67h9bh5fh56h66bhd8h9dh549h5ddh545h6fh58ch57h64fh669h594hb8h58bh9h57dh5cdhbcq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,S
ubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79l5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN 
SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-t8gd4_openstack(113ad23b-2a19-4cef-a99b-7b61d3e0779f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:55:25 crc kubenswrapper[4745]: E0121 10:55:25.931783 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-t8gd4" podUID="113ad23b-2a19-4cef-a99b-7b61d3e0779f" Jan 21 10:55:25 crc kubenswrapper[4745]: I0121 10:55:25.981310 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-q4h6w"] Jan 21 10:55:26 crc kubenswrapper[4745]: W0121 10:55:26.075171 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7d57467_feff_4abf_b152_11fe4647f21d.slice/crio-043f86a0f711cddac1f0c56793fd955146025abcda6eeaf6b6e217810600acfb WatchSource:0}: Error finding container 043f86a0f711cddac1f0c56793fd955146025abcda6eeaf6b6e217810600acfb: Status 404 returned error can't find the container with id 043f86a0f711cddac1f0c56793fd955146025abcda6eeaf6b6e217810600acfb Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.136670 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q4h6w" 
event={"ID":"a7d57467-feff-4abf-b152-11fe4647f21d","Type":"ContainerStarted","Data":"043f86a0f711cddac1f0c56793fd955146025abcda6eeaf6b6e217810600acfb"} Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.142009 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"19556673-788b-4132-97fa-616a25a67fad","Type":"ContainerStarted","Data":"0ba86d572fadc53d617c56068fa2a21bfad4ffd26b0a2e75ced3c7cfd5be1dc9"} Jan 21 10:55:26 crc kubenswrapper[4745]: E0121 10:55:26.143632 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified\\\"\"" pod="openstack/ovn-controller-t8gd4" podUID="113ad23b-2a19-4cef-a99b-7b61d3e0779f" Jan 21 10:55:26 crc kubenswrapper[4745]: E0121 10:55:26.143807 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-gncqr" podUID="6d5fae24-b6a4-48e8-b83a-beee522c1a26" Jan 21 10:55:26 crc kubenswrapper[4745]: E0121 10:55:26.144345 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" podUID="b446ccbb-1565-4fcd-821b-bf826666bc07" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.150717 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 21 10:55:26 crc kubenswrapper[4745]: W0121 10:55:26.509282 4745 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb78737fd_60ce_47e2_bfa8_92241cd4a475.slice/crio-7dc6ae6350e4895364d73b59dd92fc1067e97ae402f098ae0eac2729bebe8c12 WatchSource:0}: Error finding container 7dc6ae6350e4895364d73b59dd92fc1067e97ae402f098ae0eac2729bebe8c12: Status 404 returned error can't find the container with id 7dc6ae6350e4895364d73b59dd92fc1067e97ae402f098ae0eac2729bebe8c12 Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.587401 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.595676 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.731557 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv9g2\" (UniqueName: \"kubernetes.io/projected/ec267dae-af54-4295-a5a2-4dd05b1369fc-kube-api-access-tv9g2\") pod \"ec267dae-af54-4295-a5a2-4dd05b1369fc\" (UID: \"ec267dae-af54-4295-a5a2-4dd05b1369fc\") " Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.731616 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-config\") pod \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.731756 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x56np\" (UniqueName: \"kubernetes.io/projected/092d2c26-9a6c-4402-99d6-a8cd70a198dc-kube-api-access-x56np\") pod \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.731804 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec267dae-af54-4295-a5a2-4dd05b1369fc-config\") pod \"ec267dae-af54-4295-a5a2-4dd05b1369fc\" (UID: \"ec267dae-af54-4295-a5a2-4dd05b1369fc\") " Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.731891 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-dns-svc\") pod \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\" (UID: \"092d2c26-9a6c-4402-99d6-a8cd70a198dc\") " Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.732166 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-config" (OuterVolumeSpecName: "config") pod "092d2c26-9a6c-4402-99d6-a8cd70a198dc" (UID: "092d2c26-9a6c-4402-99d6-a8cd70a198dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.732829 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "092d2c26-9a6c-4402-99d6-a8cd70a198dc" (UID: "092d2c26-9a6c-4402-99d6-a8cd70a198dc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.732888 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec267dae-af54-4295-a5a2-4dd05b1369fc-config" (OuterVolumeSpecName: "config") pod "ec267dae-af54-4295-a5a2-4dd05b1369fc" (UID: "ec267dae-af54-4295-a5a2-4dd05b1369fc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.739176 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092d2c26-9a6c-4402-99d6-a8cd70a198dc-kube-api-access-x56np" (OuterVolumeSpecName: "kube-api-access-x56np") pod "092d2c26-9a6c-4402-99d6-a8cd70a198dc" (UID: "092d2c26-9a6c-4402-99d6-a8cd70a198dc"). InnerVolumeSpecName "kube-api-access-x56np". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.739208 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec267dae-af54-4295-a5a2-4dd05b1369fc-kube-api-access-tv9g2" (OuterVolumeSpecName: "kube-api-access-tv9g2") pod "ec267dae-af54-4295-a5a2-4dd05b1369fc" (UID: "ec267dae-af54-4295-a5a2-4dd05b1369fc"). InnerVolumeSpecName "kube-api-access-tv9g2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.833572 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.833605 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv9g2\" (UniqueName: \"kubernetes.io/projected/ec267dae-af54-4295-a5a2-4dd05b1369fc-kube-api-access-tv9g2\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.833617 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/092d2c26-9a6c-4402-99d6-a8cd70a198dc-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.833626 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x56np\" (UniqueName: 
\"kubernetes.io/projected/092d2c26-9a6c-4402-99d6-a8cd70a198dc-kube-api-access-x56np\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:26 crc kubenswrapper[4745]: I0121 10:55:26.833635 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec267dae-af54-4295-a5a2-4dd05b1369fc-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:27 crc kubenswrapper[4745]: I0121 10:55:27.148920 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" event={"ID":"092d2c26-9a6c-4402-99d6-a8cd70a198dc","Type":"ContainerDied","Data":"4106e8dfe15189153482fcfdd2ddd0ff99a192565e7bed3dbfddafbe4d26e3fd"} Jan 21 10:55:27 crc kubenswrapper[4745]: I0121 10:55:27.149336 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-hzfr4" Jan 21 10:55:27 crc kubenswrapper[4745]: I0121 10:55:27.150763 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" event={"ID":"ec267dae-af54-4295-a5a2-4dd05b1369fc","Type":"ContainerDied","Data":"83052c34587cd32f4e8c13e40d64455507487c54fca6a9ee4cd8ed50401d85f6"} Jan 21 10:55:27 crc kubenswrapper[4745]: I0121 10:55:27.150831 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gcdhw" Jan 21 10:55:27 crc kubenswrapper[4745]: I0121 10:55:27.164977 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0dd4138e-532c-446d-84ba-6bf954dfbd03","Type":"ContainerStarted","Data":"433f8bd6b386746756544815a5af970cb18196b79f2a7e61dac17d026738fad4"} Jan 21 10:55:27 crc kubenswrapper[4745]: I0121 10:55:27.168326 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b78737fd-60ce-47e2-bfa8-92241cd4a475","Type":"ContainerStarted","Data":"7dc6ae6350e4895364d73b59dd92fc1067e97ae402f098ae0eac2729bebe8c12"} Jan 21 10:55:27 crc kubenswrapper[4745]: I0121 10:55:27.268266 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gcdhw"] Jan 21 10:55:27 crc kubenswrapper[4745]: I0121 10:55:27.301754 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gcdhw"] Jan 21 10:55:27 crc kubenswrapper[4745]: I0121 10:55:27.318452 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hzfr4"] Jan 21 10:55:27 crc kubenswrapper[4745]: I0121 10:55:27.324912 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-hzfr4"] Jan 21 10:55:28 crc kubenswrapper[4745]: I0121 10:55:28.010929 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="092d2c26-9a6c-4402-99d6-a8cd70a198dc" path="/var/lib/kubelet/pods/092d2c26-9a6c-4402-99d6-a8cd70a198dc/volumes" Jan 21 10:55:28 crc kubenswrapper[4745]: I0121 10:55:28.011469 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec267dae-af54-4295-a5a2-4dd05b1369fc" path="/var/lib/kubelet/pods/ec267dae-af54-4295-a5a2-4dd05b1369fc/volumes" Jan 21 10:55:28 crc kubenswrapper[4745]: I0121 10:55:28.179323 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xs6fp" 
event={"ID":"50f6a02e-ecd9-48c9-8332-806fda00af43","Type":"ContainerStarted","Data":"37a65ed8b965d201eeeafd40457b62633ba4ccc98206cfb24d34366a7d03c8db"} Jan 21 10:55:29 crc kubenswrapper[4745]: I0121 10:55:29.185873 4745 generic.go:334] "Generic (PLEG): container finished" podID="50f6a02e-ecd9-48c9-8332-806fda00af43" containerID="37a65ed8b965d201eeeafd40457b62633ba4ccc98206cfb24d34366a7d03c8db" exitCode=0 Jan 21 10:55:29 crc kubenswrapper[4745]: I0121 10:55:29.185909 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xs6fp" event={"ID":"50f6a02e-ecd9-48c9-8332-806fda00af43","Type":"ContainerDied","Data":"37a65ed8b965d201eeeafd40457b62633ba4ccc98206cfb24d34366a7d03c8db"} Jan 21 10:55:30 crc kubenswrapper[4745]: I0121 10:55:30.197414 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf","Type":"ContainerStarted","Data":"4177f03df9d6f5291b8905eb433528b3d04b38761bf84c08c6a05a2609b7745c"} Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.630130 4745 generic.go:334] "Generic (PLEG): container finished" podID="0dd4138e-532c-446d-84ba-6bf954dfbd03" containerID="433f8bd6b386746756544815a5af970cb18196b79f2a7e61dac17d026738fad4" exitCode=0 Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.630334 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0dd4138e-532c-446d-84ba-6bf954dfbd03","Type":"ContainerDied","Data":"433f8bd6b386746756544815a5af970cb18196b79f2a7e61dac17d026738fad4"} Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.634760 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b78737fd-60ce-47e2-bfa8-92241cd4a475","Type":"ContainerStarted","Data":"dffc9f80ac707c67284e21191ba3838d4b37da91d23170c9a84cf32a9a5a4a12"} Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.638780 4745 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ovn-controller-ovs-xs6fp" event={"ID":"50f6a02e-ecd9-48c9-8332-806fda00af43","Type":"ContainerStarted","Data":"ee0f8a185a2dbef3f6308e11619a595689a1d2adba92b96140843c17bd7f5bd7"} Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.642245 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9253af27-9c32-4977-9632-266bb434fd18","Type":"ContainerStarted","Data":"ab195d1d4485673ef7dce3fe64614f8eb6fd2af44b68b0a68044e9d7b60fdf3e"} Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.642981 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.644375 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"19556673-788b-4132-97fa-616a25a67fad","Type":"ContainerStarted","Data":"a1a3bb4d30117326c29856858964f0f225a287d0951a21141ecbf5555b0b0db8"} Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.646167 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd","Type":"ContainerStarted","Data":"2e8457d97d6f8c7a0b6f7fb524f7691d6db22f51ec5ca02805da55e3707b3daa"} Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.646328 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.648096 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-q4h6w" event={"ID":"a7d57467-feff-4abf-b152-11fe4647f21d","Type":"ContainerStarted","Data":"1aa9763dfa8c3e0f8607f50d4aba3b5098d5f3d1dbc9af44fb210a0654454a95"} Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.707149 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-q4h6w" podStartSLOduration=31.849309533 
podStartE2EDuration="35.707131045s" podCreationTimestamp="2026-01-21 10:54:56 +0000 UTC" firstStartedPulling="2026-01-21 10:55:26.080630053 +0000 UTC m=+1110.541417671" lastFinishedPulling="2026-01-21 10:55:29.938451585 +0000 UTC m=+1114.399239183" observedRunningTime="2026-01-21 10:55:31.684295084 +0000 UTC m=+1116.145082682" watchObservedRunningTime="2026-01-21 10:55:31.707131045 +0000 UTC m=+1116.167918643" Jan 21 10:55:31 crc kubenswrapper[4745]: I0121 10:55:31.718381 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.237197459 podStartE2EDuration="40.718363631s" podCreationTimestamp="2026-01-21 10:54:51 +0000 UTC" firstStartedPulling="2026-01-21 10:54:54.290020323 +0000 UTC m=+1078.750807921" lastFinishedPulling="2026-01-21 10:55:29.771186495 +0000 UTC m=+1114.231974093" observedRunningTime="2026-01-21 10:55:31.70806665 +0000 UTC m=+1116.168854248" watchObservedRunningTime="2026-01-21 10:55:31.718363631 +0000 UTC m=+1116.179151229" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.619683 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=4.964301439 podStartE2EDuration="44.619657704s" podCreationTimestamp="2026-01-21 10:54:48 +0000 UTC" firstStartedPulling="2026-01-21 10:54:50.273892286 +0000 UTC m=+1074.734679884" lastFinishedPulling="2026-01-21 10:55:29.929248551 +0000 UTC m=+1114.390036149" observedRunningTime="2026-01-21 10:55:31.746228261 +0000 UTC m=+1116.207015859" watchObservedRunningTime="2026-01-21 10:55:32.619657704 +0000 UTC m=+1117.080445302" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.626713 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncqr"] Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.738624 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vnhmq"] Jan 21 10:55:32 crc 
kubenswrapper[4745]: I0121 10:55:32.740267 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.742521 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.771203 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vnhmq"] Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.794091 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggv5b\" (UniqueName: \"kubernetes.io/projected/a9fae0a3-ae1c-4c51-8632-13424ad116f6-kube-api-access-ggv5b\") pod \"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.794419 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.794823 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.794943 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-config\") pod 
\"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.897366 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-config\") pod \"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.897456 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggv5b\" (UniqueName: \"kubernetes.io/projected/a9fae0a3-ae1c-4c51-8632-13424ad116f6-kube-api-access-ggv5b\") pod \"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.897509 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.897591 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.898639 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " 
pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.899079 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.899806 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-config\") pod \"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:32 crc kubenswrapper[4745]: I0121 10:55:32.941505 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggv5b\" (UniqueName: \"kubernetes.io/projected/a9fae0a3-ae1c-4c51-8632-13424ad116f6-kube-api-access-ggv5b\") pod \"dnsmasq-dns-7fd796d7df-vnhmq\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.068719 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.084430 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wrpzl"] Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.099469 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jqsw"] Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.110492 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.113476 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.122937 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jqsw"] Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.222087 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.338330 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5cjcn\" (UniqueName: \"kubernetes.io/projected/6d5fae24-b6a4-48e8-b83a-beee522c1a26-kube-api-access-5cjcn\") pod \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.338493 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-dns-svc\") pod \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.338704 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-config\") pod \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\" (UID: \"6d5fae24-b6a4-48e8-b83a-beee522c1a26\") " Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.339097 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-config\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.339141 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.339197 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.339314 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6d5fae24-b6a4-48e8-b83a-beee522c1a26" (UID: "6d5fae24-b6a4-48e8-b83a-beee522c1a26"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.339378 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.339548 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb2l8\" (UniqueName: \"kubernetes.io/projected/c130b339-04cf-40b3-bb1b-5354c12cece1-kube-api-access-pb2l8\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.339688 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-config" (OuterVolumeSpecName: "config") pod "6d5fae24-b6a4-48e8-b83a-beee522c1a26" (UID: "6d5fae24-b6a4-48e8-b83a-beee522c1a26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.339801 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.362870 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5fae24-b6a4-48e8-b83a-beee522c1a26-kube-api-access-5cjcn" (OuterVolumeSpecName: "kube-api-access-5cjcn") pod "6d5fae24-b6a4-48e8-b83a-beee522c1a26" (UID: "6d5fae24-b6a4-48e8-b83a-beee522c1a26"). InnerVolumeSpecName "kube-api-access-5cjcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.442428 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.442477 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb2l8\" (UniqueName: \"kubernetes.io/projected/c130b339-04cf-40b3-bb1b-5354c12cece1-kube-api-access-pb2l8\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.442559 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-config\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.442589 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.442611 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 
10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.442690 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6d5fae24-b6a4-48e8-b83a-beee522c1a26-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.442702 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5cjcn\" (UniqueName: \"kubernetes.io/projected/6d5fae24-b6a4-48e8-b83a-beee522c1a26-kube-api-access-5cjcn\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.443572 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.443916 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.444210 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-config\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.444461 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " 
pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.466504 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb2l8\" (UniqueName: \"kubernetes.io/projected/c130b339-04cf-40b3-bb1b-5354c12cece1-kube-api-access-pb2l8\") pod \"dnsmasq-dns-86db49b7ff-8jqsw\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.518205 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.634372 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vnhmq"] Jan 21 10:55:33 crc kubenswrapper[4745]: W0121 10:55:33.645874 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9fae0a3_ae1c_4c51_8632_13424ad116f6.slice/crio-3fc2785e116dddbade5161ffa6cef1be1738723849d01b3e7e72f65d5ed1df8a WatchSource:0}: Error finding container 3fc2785e116dddbade5161ffa6cef1be1738723849d01b3e7e72f65d5ed1df8a: Status 404 returned error can't find the container with id 3fc2785e116dddbade5161ffa6cef1be1738723849d01b3e7e72f65d5ed1df8a Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.753027 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" event={"ID":"a9fae0a3-ae1c-4c51-8632-13424ad116f6","Type":"ContainerStarted","Data":"3fc2785e116dddbade5161ffa6cef1be1738723849d01b3e7e72f65d5ed1df8a"} Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.755097 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-gncqr" event={"ID":"6d5fae24-b6a4-48e8-b83a-beee522c1a26","Type":"ContainerDied","Data":"fa380c76a650cdc010dfa42985b585bb69823cd7a1f490772926fdbb4f26cbcb"} Jan 21 10:55:33 crc kubenswrapper[4745]: 
I0121 10:55:33.755206 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-gncqr" Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.833230 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncqr"] Jan 21 10:55:33 crc kubenswrapper[4745]: I0121 10:55:33.838974 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-gncqr"] Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.013808 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5fae24-b6a4-48e8-b83a-beee522c1a26" path="/var/lib/kubelet/pods/6d5fae24-b6a4-48e8-b83a-beee522c1a26/volumes" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.226707 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jqsw"] Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.234901 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:55:34 crc kubenswrapper[4745]: W0121 10:55:34.241997 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc130b339_04cf_40b3_bb1b_5354c12cece1.slice/crio-3d7c442dc357e13e09415f2199f9e03911ddd81ed24244eca2b01460fa3cbf3a WatchSource:0}: Error finding container 3d7c442dc357e13e09415f2199f9e03911ddd81ed24244eca2b01460fa3cbf3a: Status 404 returned error can't find the container with id 3d7c442dc357e13e09415f2199f9e03911ddd81ed24244eca2b01460fa3cbf3a Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.258071 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-config\") pod \"b446ccbb-1565-4fcd-821b-bf826666bc07\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.258521 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-dns-svc\") pod \"b446ccbb-1565-4fcd-821b-bf826666bc07\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.258560 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vjkf\" (UniqueName: \"kubernetes.io/projected/b446ccbb-1565-4fcd-821b-bf826666bc07-kube-api-access-5vjkf\") pod \"b446ccbb-1565-4fcd-821b-bf826666bc07\" (UID: \"b446ccbb-1565-4fcd-821b-bf826666bc07\") " Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.258718 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-config" (OuterVolumeSpecName: "config") pod "b446ccbb-1565-4fcd-821b-bf826666bc07" (UID: "b446ccbb-1565-4fcd-821b-bf826666bc07"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.258923 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.258987 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b446ccbb-1565-4fcd-821b-bf826666bc07" (UID: "b446ccbb-1565-4fcd-821b-bf826666bc07"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.262648 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b446ccbb-1565-4fcd-821b-bf826666bc07-kube-api-access-5vjkf" (OuterVolumeSpecName: "kube-api-access-5vjkf") pod "b446ccbb-1565-4fcd-821b-bf826666bc07" (UID: "b446ccbb-1565-4fcd-821b-bf826666bc07"). InnerVolumeSpecName "kube-api-access-5vjkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.360503 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b446ccbb-1565-4fcd-821b-bf826666bc07-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.360561 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vjkf\" (UniqueName: \"kubernetes.io/projected/b446ccbb-1565-4fcd-821b-bf826666bc07-kube-api-access-5vjkf\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.767760 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"19556673-788b-4132-97fa-616a25a67fad","Type":"ContainerStarted","Data":"06b38e1a08ed437b07fd69e5279a9c179811329f1bd04b44c10ab342d115c6cf"} Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.771577 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" event={"ID":"c130b339-04cf-40b3-bb1b-5354c12cece1","Type":"ContainerStarted","Data":"3d7c442dc357e13e09415f2199f9e03911ddd81ed24244eca2b01460fa3cbf3a"} Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.773146 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" event={"ID":"b446ccbb-1565-4fcd-821b-bf826666bc07","Type":"ContainerDied","Data":"87d448a3256ff1132c83152f7ae61773c3e569bfeaa913c4bb463f08d9b80ae6"} Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.773182 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-wrpzl" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.775305 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"0dd4138e-532c-446d-84ba-6bf954dfbd03","Type":"ContainerStarted","Data":"325c46918272d1752663332a2da1f5cf478045cdab4db9f99fc1236846f8e6c3"} Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.778341 4745 generic.go:334] "Generic (PLEG): container finished" podID="a9fae0a3-ae1c-4c51-8632-13424ad116f6" containerID="d5fbc0b1165cfb7c33d8801312a8c92b68dea3a5712c2df5626a72e8b2a32131" exitCode=0 Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.778403 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" event={"ID":"a9fae0a3-ae1c-4c51-8632-13424ad116f6","Type":"ContainerDied","Data":"d5fbc0b1165cfb7c33d8801312a8c92b68dea3a5712c2df5626a72e8b2a32131"} Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.782127 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"b78737fd-60ce-47e2-bfa8-92241cd4a475","Type":"ContainerStarted","Data":"79cdd7c6c5e2a3dc581d99fe5ce6fb184961d6cf50774df6d26bf43e34fb5476"} Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.785023 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-xs6fp" event={"ID":"50f6a02e-ecd9-48c9-8332-806fda00af43","Type":"ContainerStarted","Data":"cd1a21b9e991c04197861d91221614543112d75ae71726707083a91d6a090387"} Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.785255 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.785272 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.805747 4745 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=33.653090375 podStartE2EDuration="37.805728514s" podCreationTimestamp="2026-01-21 10:54:57 +0000 UTC" firstStartedPulling="2026-01-21 10:55:25.613766744 +0000 UTC m=+1110.074554342" lastFinishedPulling="2026-01-21 10:55:29.766404883 +0000 UTC m=+1114.227192481" observedRunningTime="2026-01-21 10:55:34.799170627 +0000 UTC m=+1119.259958225" watchObservedRunningTime="2026-01-21 10:55:34.805728514 +0000 UTC m=+1119.266516112" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.826117 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=15.949895203 podStartE2EDuration="47.826099864s" podCreationTimestamp="2026-01-21 10:54:47 +0000 UTC" firstStartedPulling="2026-01-21 10:54:50.273827175 +0000 UTC m=+1074.734614773" lastFinishedPulling="2026-01-21 10:55:22.150031836 +0000 UTC m=+1106.610819434" observedRunningTime="2026-01-21 10:55:34.815203026 +0000 UTC m=+1119.275990624" watchObservedRunningTime="2026-01-21 10:55:34.826099864 +0000 UTC m=+1119.286887462" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.862105 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-xs6fp" podStartSLOduration=22.02322003 podStartE2EDuration="40.862088009s" podCreationTimestamp="2026-01-21 10:54:54 +0000 UTC" firstStartedPulling="2026-01-21 10:55:07.215744682 +0000 UTC m=+1091.676532300" lastFinishedPulling="2026-01-21 10:55:26.054612681 +0000 UTC m=+1110.515400279" observedRunningTime="2026-01-21 10:55:34.861358041 +0000 UTC m=+1119.322145639" watchObservedRunningTime="2026-01-21 10:55:34.862088009 +0000 UTC m=+1119.322875607" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.894621 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=34.618317565 
podStartE2EDuration="37.894599278s" podCreationTimestamp="2026-01-21 10:54:57 +0000 UTC" firstStartedPulling="2026-01-21 10:55:26.513335002 +0000 UTC m=+1110.974122600" lastFinishedPulling="2026-01-21 10:55:29.789616715 +0000 UTC m=+1114.250404313" observedRunningTime="2026-01-21 10:55:34.885357843 +0000 UTC m=+1119.346145441" watchObservedRunningTime="2026-01-21 10:55:34.894599278 +0000 UTC m=+1119.355386876" Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.925416 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wrpzl"] Jan 21 10:55:34 crc kubenswrapper[4745]: I0121 10:55:34.942169 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-wrpzl"] Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.013661 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.044254 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.057120 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.095451 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.792118 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" event={"ID":"a9fae0a3-ae1c-4c51-8632-13424ad116f6","Type":"ContainerStarted","Data":"19cd48dca76c9f4d7360ca191f814cf506d4569b7cf1bdcaee9332a0044d824f"} Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.793396 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.794852 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c4211-e324-49a4-8493-6685e4f5bee8","Type":"ContainerStarted","Data":"c6f7996113b4bddd9c946091c6d575b94b2e4d227cbd53bacf0332274d5d275c"} Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.797108 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4af3b414-a820-42a8-89c4-f9cade535b01","Type":"ContainerStarted","Data":"d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1"} Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.799081 4745 generic.go:334] "Generic (PLEG): container finished" podID="c130b339-04cf-40b3-bb1b-5354c12cece1" containerID="5aafb6c6b33705ac7babfa4394b3117ee4ef3eb7f7affd7ec44d6385a6f8d365" exitCode=0 Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.799140 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" event={"ID":"c130b339-04cf-40b3-bb1b-5354c12cece1","Type":"ContainerDied","Data":"5aafb6c6b33705ac7babfa4394b3117ee4ef3eb7f7affd7ec44d6385a6f8d365"} Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.799428 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.799467 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.836823 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" podStartSLOduration=3.271443574 podStartE2EDuration="3.836802551s" podCreationTimestamp="2026-01-21 10:55:32 +0000 UTC" firstStartedPulling="2026-01-21 10:55:33.742192031 +0000 UTC m=+1118.202979629" lastFinishedPulling="2026-01-21 10:55:34.307550988 +0000 UTC m=+1118.768338606" observedRunningTime="2026-01-21 10:55:35.827191497 +0000 UTC m=+1120.287979105" 
watchObservedRunningTime="2026-01-21 10:55:35.836802551 +0000 UTC m=+1120.297590149" Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.906921 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 21 10:55:35 crc kubenswrapper[4745]: I0121 10:55:35.915419 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.028402 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b446ccbb-1565-4fcd-821b-bf826666bc07" path="/var/lib/kubelet/pods/b446ccbb-1565-4fcd-821b-bf826666bc07/volumes" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.312433 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.314062 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.316007 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.317512 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.318031 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.318159 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-4f9vh" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.346094 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.400010 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgv45\" 
(UniqueName: \"kubernetes.io/projected/9455e114-6033-43af-960e-65da0f232984-kube-api-access-dgv45\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.400115 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455e114-6033-43af-960e-65da0f232984-config\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.400140 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9455e114-6033-43af-960e-65da0f232984-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.400182 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9455e114-6033-43af-960e-65da0f232984-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.400230 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9455e114-6033-43af-960e-65da0f232984-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.400258 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9455e114-6033-43af-960e-65da0f232984-scripts\") pod \"ovn-northd-0\" (UID: 
\"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.400300 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9455e114-6033-43af-960e-65da0f232984-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.501372 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455e114-6033-43af-960e-65da0f232984-config\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.501413 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9455e114-6033-43af-960e-65da0f232984-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.501461 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9455e114-6033-43af-960e-65da0f232984-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.501480 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9455e114-6033-43af-960e-65da0f232984-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.501496 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9455e114-6033-43af-960e-65da0f232984-scripts\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.501519 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9455e114-6033-43af-960e-65da0f232984-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.501626 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgv45\" (UniqueName: \"kubernetes.io/projected/9455e114-6033-43af-960e-65da0f232984-kube-api-access-dgv45\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.502593 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9455e114-6033-43af-960e-65da0f232984-config\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.503424 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9455e114-6033-43af-960e-65da0f232984-scripts\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.503817 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9455e114-6033-43af-960e-65da0f232984-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 
21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.507661 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/9455e114-6033-43af-960e-65da0f232984-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.508831 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9455e114-6033-43af-960e-65da0f232984-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.520414 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9455e114-6033-43af-960e-65da0f232984-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.522510 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgv45\" (UniqueName: \"kubernetes.io/projected/9455e114-6033-43af-960e-65da0f232984-kube-api-access-dgv45\") pod \"ovn-northd-0\" (UID: \"9455e114-6033-43af-960e-65da0f232984\") " pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.631125 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.810125 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" event={"ID":"c130b339-04cf-40b3-bb1b-5354c12cece1","Type":"ContainerStarted","Data":"6dd2eaa091ce28e37fe722ba56b86b3fca7077a7bd70a6442973a08aae8bf7db"} Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.810949 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:36 crc kubenswrapper[4745]: I0121 10:55:36.845713 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" podStartSLOduration=4.383256018 podStartE2EDuration="4.845689945s" podCreationTimestamp="2026-01-21 10:55:32 +0000 UTC" firstStartedPulling="2026-01-21 10:55:34.245025115 +0000 UTC m=+1118.705812713" lastFinishedPulling="2026-01-21 10:55:34.707459042 +0000 UTC m=+1119.168246640" observedRunningTime="2026-01-21 10:55:36.836253494 +0000 UTC m=+1121.297041092" watchObservedRunningTime="2026-01-21 10:55:36.845689945 +0000 UTC m=+1121.306477543" Jan 21 10:55:37 crc kubenswrapper[4745]: I0121 10:55:37.138743 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 21 10:55:37 crc kubenswrapper[4745]: W0121 10:55:37.150084 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9455e114_6033_43af_960e_65da0f232984.slice/crio-649b8e74306a195657b69d54d65dd8986516cf3ea713bdb337552bdf765dd961 WatchSource:0}: Error finding container 649b8e74306a195657b69d54d65dd8986516cf3ea713bdb337552bdf765dd961: Status 404 returned error can't find the container with id 649b8e74306a195657b69d54d65dd8986516cf3ea713bdb337552bdf765dd961 Jan 21 10:55:37 crc kubenswrapper[4745]: I0121 10:55:37.819295 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-northd-0" event={"ID":"9455e114-6033-43af-960e-65da0f232984","Type":"ContainerStarted","Data":"649b8e74306a195657b69d54d65dd8986516cf3ea713bdb337552bdf765dd961"} Jan 21 10:55:37 crc kubenswrapper[4745]: I0121 10:55:37.823278 4745 generic.go:334] "Generic (PLEG): container finished" podID="c2b5df3e-a44d-42ff-96a4-2bfd32db45bf" containerID="4177f03df9d6f5291b8905eb433528b3d04b38761bf84c08c6a05a2609b7745c" exitCode=0 Jan 21 10:55:37 crc kubenswrapper[4745]: I0121 10:55:37.824325 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf","Type":"ContainerDied","Data":"4177f03df9d6f5291b8905eb433528b3d04b38761bf84c08c6a05a2609b7745c"} Jan 21 10:55:38 crc kubenswrapper[4745]: I0121 10:55:38.835008 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c2b5df3e-a44d-42ff-96a4-2bfd32db45bf","Type":"ContainerStarted","Data":"686f5fd43d38b775afffa85da7b63ee82119311a902a0f19e8e57fedbba5288f"} Jan 21 10:55:38 crc kubenswrapper[4745]: I0121 10:55:38.840122 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-t8gd4" event={"ID":"113ad23b-2a19-4cef-a99b-7b61d3e0779f","Type":"ContainerStarted","Data":"c094f1ecebffa70c695f71a793ae6f05657c3c8e9de6b35f5f2761608920f135"} Jan 21 10:55:38 crc kubenswrapper[4745]: I0121 10:55:38.840776 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-t8gd4" Jan 21 10:55:38 crc kubenswrapper[4745]: I0121 10:55:38.844840 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"9455e114-6033-43af-960e-65da0f232984","Type":"ContainerStarted","Data":"47ad5deb30de28d6e3720f62b3d3b06d14982f6f4e116c840b55184cffc4bafd"} Jan 21 10:55:38 crc kubenswrapper[4745]: I0121 10:55:38.844891 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"9455e114-6033-43af-960e-65da0f232984","Type":"ContainerStarted","Data":"1b73af09b2a2d2af2ca22b45325e5425a0a2fe8be5806b6dc3c0aa07825066cd"} Jan 21 10:55:38 crc kubenswrapper[4745]: I0121 10:55:38.845101 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 21 10:55:38 crc kubenswrapper[4745]: I0121 10:55:38.866279 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=15.552415442000001 podStartE2EDuration="52.866262751s" podCreationTimestamp="2026-01-21 10:54:46 +0000 UTC" firstStartedPulling="2026-01-21 10:54:48.740223267 +0000 UTC m=+1073.201010865" lastFinishedPulling="2026-01-21 10:55:26.054070576 +0000 UTC m=+1110.514858174" observedRunningTime="2026-01-21 10:55:38.862765562 +0000 UTC m=+1123.323553160" watchObservedRunningTime="2026-01-21 10:55:38.866262751 +0000 UTC m=+1123.327050349" Jan 21 10:55:38 crc kubenswrapper[4745]: I0121 10:55:38.925470 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.805292991 podStartE2EDuration="2.925453178s" podCreationTimestamp="2026-01-21 10:55:36 +0000 UTC" firstStartedPulling="2026-01-21 10:55:37.15221905 +0000 UTC m=+1121.613006648" lastFinishedPulling="2026-01-21 10:55:38.272379247 +0000 UTC m=+1122.733166835" observedRunningTime="2026-01-21 10:55:38.893138394 +0000 UTC m=+1123.353925992" watchObservedRunningTime="2026-01-21 10:55:38.925453178 +0000 UTC m=+1123.386240776" Jan 21 10:55:38 crc kubenswrapper[4745]: I0121 10:55:38.926420 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-t8gd4" podStartSLOduration=2.702571425 podStartE2EDuration="44.926414712s" podCreationTimestamp="2026-01-21 10:54:54 +0000 UTC" firstStartedPulling="2026-01-21 10:54:55.588288049 +0000 UTC m=+1080.049075647" lastFinishedPulling="2026-01-21 10:55:37.812131336 +0000 UTC m=+1122.272918934" 
observedRunningTime="2026-01-21 10:55:38.922878432 +0000 UTC m=+1123.383666030" watchObservedRunningTime="2026-01-21 10:55:38.926414712 +0000 UTC m=+1123.387202310" Jan 21 10:55:39 crc kubenswrapper[4745]: I0121 10:55:39.224866 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 21 10:55:39 crc kubenswrapper[4745]: I0121 10:55:39.294572 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 21 10:55:39 crc kubenswrapper[4745]: I0121 10:55:39.294702 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 21 10:55:39 crc kubenswrapper[4745]: I0121 10:55:39.452697 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 21 10:55:39 crc kubenswrapper[4745]: I0121 10:55:39.945936 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 21 10:55:41 crc kubenswrapper[4745]: I0121 10:55:41.838729 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vnhmq"] Jan 21 10:55:41 crc kubenswrapper[4745]: I0121 10:55:41.839238 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" podUID="a9fae0a3-ae1c-4c51-8632-13424ad116f6" containerName="dnsmasq-dns" containerID="cri-o://19cd48dca76c9f4d7360ca191f814cf506d4569b7cf1bdcaee9332a0044d824f" gracePeriod=10 Jan 21 10:55:41 crc kubenswrapper[4745]: I0121 10:55:41.840745 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:41 crc kubenswrapper[4745]: I0121 10:55:41.887670 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-nhwts"] Jan 21 10:55:41 crc kubenswrapper[4745]: I0121 10:55:41.889542 4745 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:41 crc kubenswrapper[4745]: I0121 10:55:41.991689 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nhwts"] Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.042379 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5477\" (UniqueName: \"kubernetes.io/projected/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-kube-api-access-x5477\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.042476 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.042550 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-dns-svc\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.042582 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-config\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.042643 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.144432 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5477\" (UniqueName: \"kubernetes.io/projected/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-kube-api-access-x5477\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.144550 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.144596 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-dns-svc\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.144636 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-config\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.144689 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.145743 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-dns-svc\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.146976 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.147123 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-config\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.148355 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.166973 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5477\" (UniqueName: 
\"kubernetes.io/projected/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-kube-api-access-x5477\") pod \"dnsmasq-dns-698758b865-nhwts\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.219426 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.229188 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.810604 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nhwts"] Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.933133 4745 generic.go:334] "Generic (PLEG): container finished" podID="a9fae0a3-ae1c-4c51-8632-13424ad116f6" containerID="19cd48dca76c9f4d7360ca191f814cf506d4569b7cf1bdcaee9332a0044d824f" exitCode=0 Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.933225 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" event={"ID":"a9fae0a3-ae1c-4c51-8632-13424ad116f6","Type":"ContainerDied","Data":"19cd48dca76c9f4d7360ca191f814cf506d4569b7cf1bdcaee9332a0044d824f"} Jan 21 10:55:42 crc kubenswrapper[4745]: I0121 10:55:42.945636 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nhwts" event={"ID":"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51","Type":"ContainerStarted","Data":"36b47bc0a8eb5bb6bf0f68682fef86c4129f3eab61142b2405f84c9a7ea8e83f"} Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.064921 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.095437 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 21 10:55:43 crc kubenswrapper[4745]: E0121 10:55:43.095831 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9fae0a3-ae1c-4c51-8632-13424ad116f6" containerName="init" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.095850 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fae0a3-ae1c-4c51-8632-13424ad116f6" containerName="init" Jan 21 10:55:43 crc kubenswrapper[4745]: E0121 10:55:43.095886 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9fae0a3-ae1c-4c51-8632-13424ad116f6" containerName="dnsmasq-dns" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.095893 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fae0a3-ae1c-4c51-8632-13424ad116f6" containerName="dnsmasq-dns" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.096086 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9fae0a3-ae1c-4c51-8632-13424ad116f6" containerName="dnsmasq-dns" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.101034 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.104695 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-zm787" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.104745 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.105121 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.107370 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.160121 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.220004 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-config\") pod \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.220830 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-ovsdbserver-nb\") pod \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.220971 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-dns-svc\") pod \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.221062 4745 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ggv5b\" (UniqueName: \"kubernetes.io/projected/a9fae0a3-ae1c-4c51-8632-13424ad116f6-kube-api-access-ggv5b\") pod \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\" (UID: \"a9fae0a3-ae1c-4c51-8632-13424ad116f6\") " Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.221303 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e3c32d66-7e7d-40dc-8726-2084e85452af-cache\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.221413 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.221515 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e3c32d66-7e7d-40dc-8726-2084e85452af-lock\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.222596 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw6g9\" (UniqueName: \"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-kube-api-access-cw6g9\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.222725 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.247761 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9fae0a3-ae1c-4c51-8632-13424ad116f6-kube-api-access-ggv5b" (OuterVolumeSpecName: "kube-api-access-ggv5b") pod "a9fae0a3-ae1c-4c51-8632-13424ad116f6" (UID: "a9fae0a3-ae1c-4c51-8632-13424ad116f6"). InnerVolumeSpecName "kube-api-access-ggv5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.280660 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9fae0a3-ae1c-4c51-8632-13424ad116f6" (UID: "a9fae0a3-ae1c-4c51-8632-13424ad116f6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.318095 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9fae0a3-ae1c-4c51-8632-13424ad116f6" (UID: "a9fae0a3-ae1c-4c51-8632-13424ad116f6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.328417 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cw6g9\" (UniqueName: \"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-kube-api-access-cw6g9\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.328482 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.328544 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e3c32d66-7e7d-40dc-8726-2084e85452af-cache\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.328579 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.328623 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e3c32d66-7e7d-40dc-8726-2084e85452af-lock\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.328671 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.328684 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggv5b\" (UniqueName: \"kubernetes.io/projected/a9fae0a3-ae1c-4c51-8632-13424ad116f6-kube-api-access-ggv5b\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.328695 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.329112 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e3c32d66-7e7d-40dc-8726-2084e85452af-lock\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: E0121 10:55:43.329379 4745 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 10:55:43 crc kubenswrapper[4745]: E0121 10:55:43.329392 4745 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 10:55:43 crc kubenswrapper[4745]: E0121 10:55:43.329423 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift podName:e3c32d66-7e7d-40dc-8726-2084e85452af nodeName:}" failed. No retries permitted until 2026-01-21 10:55:43.829408649 +0000 UTC m=+1128.290196247 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift") pod "swift-storage-0" (UID: "e3c32d66-7e7d-40dc-8726-2084e85452af") : configmap "swift-ring-files" not found Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.332942 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.334212 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-config" (OuterVolumeSpecName: "config") pod "a9fae0a3-ae1c-4c51-8632-13424ad116f6" (UID: "a9fae0a3-ae1c-4c51-8632-13424ad116f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.346021 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e3c32d66-7e7d-40dc-8726-2084e85452af-cache\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.346645 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cw6g9\" (UniqueName: \"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-kube-api-access-cw6g9\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.377430 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod 
\"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.430726 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9fae0a3-ae1c-4c51-8632-13424ad116f6-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.519693 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.837435 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:43 crc kubenswrapper[4745]: E0121 10:55:43.837730 4745 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 10:55:43 crc kubenswrapper[4745]: E0121 10:55:43.837755 4745 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 10:55:43 crc kubenswrapper[4745]: E0121 10:55:43.837825 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift podName:e3c32d66-7e7d-40dc-8726-2084e85452af nodeName:}" failed. No retries permitted until 2026-01-21 10:55:44.837808117 +0000 UTC m=+1129.298595735 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift") pod "swift-storage-0" (UID: "e3c32d66-7e7d-40dc-8726-2084e85452af") : configmap "swift-ring-files" not found Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.954555 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nhwts" event={"ID":"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51","Type":"ContainerDied","Data":"3d884056ea34c96cc5c316f140aa10193d87244478f4631ce7958fbb2871e895"} Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.954488 4745 generic.go:334] "Generic (PLEG): container finished" podID="0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" containerID="3d884056ea34c96cc5c316f140aa10193d87244478f4631ce7958fbb2871e895" exitCode=0 Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.959029 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" event={"ID":"a9fae0a3-ae1c-4c51-8632-13424ad116f6","Type":"ContainerDied","Data":"3fc2785e116dddbade5161ffa6cef1be1738723849d01b3e7e72f65d5ed1df8a"} Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.959292 4745 scope.go:117] "RemoveContainer" containerID="19cd48dca76c9f4d7360ca191f814cf506d4569b7cf1bdcaee9332a0044d824f" Jan 21 10:55:43 crc kubenswrapper[4745]: I0121 10:55:43.959322 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:55:44 crc kubenswrapper[4745]: I0121 10:55:44.093675 4745 scope.go:117] "RemoveContainer" containerID="d5fbc0b1165cfb7c33d8801312a8c92b68dea3a5712c2df5626a72e8b2a32131" Jan 21 10:55:44 crc kubenswrapper[4745]: I0121 10:55:44.852238 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:44 crc kubenswrapper[4745]: E0121 10:55:44.852571 4745 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 10:55:44 crc kubenswrapper[4745]: E0121 10:55:44.853045 4745 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 10:55:44 crc kubenswrapper[4745]: E0121 10:55:44.853151 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift podName:e3c32d66-7e7d-40dc-8726-2084e85452af nodeName:}" failed. No retries permitted until 2026-01-21 10:55:46.853135463 +0000 UTC m=+1131.313923061 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift") pod "swift-storage-0" (UID: "e3c32d66-7e7d-40dc-8726-2084e85452af") : configmap "swift-ring-files" not found Jan 21 10:55:44 crc kubenswrapper[4745]: I0121 10:55:44.970173 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nhwts" event={"ID":"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51","Type":"ContainerStarted","Data":"654ada3e56d569575b8e13e6e0679120caa50187c4c8de7bd0af946f3a175d26"} Jan 21 10:55:44 crc kubenswrapper[4745]: I0121 10:55:44.971722 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:44 crc kubenswrapper[4745]: I0121 10:55:44.993424 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-nhwts" podStartSLOduration=3.993402975 podStartE2EDuration="3.993402975s" podCreationTimestamp="2026-01-21 10:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:55:44.988071539 +0000 UTC m=+1129.448859157" watchObservedRunningTime="2026-01-21 10:55:44.993402975 +0000 UTC m=+1129.454190583" Jan 21 10:55:46 crc kubenswrapper[4745]: I0121 10:55:46.895950 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:46 crc kubenswrapper[4745]: E0121 10:55:46.896279 4745 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 10:55:46 crc kubenswrapper[4745]: E0121 10:55:46.896323 4745 projected.go:194] Error preparing data for projected volume etc-swift for pod 
openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 10:55:46 crc kubenswrapper[4745]: E0121 10:55:46.896458 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift podName:e3c32d66-7e7d-40dc-8726-2084e85452af nodeName:}" failed. No retries permitted until 2026-01-21 10:55:50.896400836 +0000 UTC m=+1135.357188434 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift") pod "swift-storage-0" (UID: "e3c32d66-7e7d-40dc-8726-2084e85452af") : configmap "swift-ring-files" not found Jan 21 10:55:46 crc kubenswrapper[4745]: I0121 10:55:46.924682 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-ssbfp"] Jan 21 10:55:46 crc kubenswrapper[4745]: I0121 10:55:46.925855 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ssbfp" Jan 21 10:55:46 crc kubenswrapper[4745]: I0121 10:55:46.930170 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 21 10:55:46 crc kubenswrapper[4745]: I0121 10:55:46.930847 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 10:55:46 crc kubenswrapper[4745]: I0121 10:55:46.934889 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 21 10:55:46 crc kubenswrapper[4745]: I0121 10:55:46.965017 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-ssbfp"] Jan 21 10:55:46 crc kubenswrapper[4745]: E0121 10:55:46.965645 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-z9lpq ring-data-devices scripts swiftconf], unattached volumes=[], failed to process 
volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-z9lpq ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-ssbfp" podUID="4a711db6-0c34-43f5-8f06-e164199cdbac" Jan 21 10:55:46 crc kubenswrapper[4745]: I0121 10:55:46.973833 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-gqgp7"] Jan 21 10:55:46 crc kubenswrapper[4745]: I0121 10:55:46.974795 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:46 crc kubenswrapper[4745]: I0121 10:55:46.986595 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-ssbfp"] Jan 21 10:55:46 crc kubenswrapper[4745]: I0121 10:55:46.988122 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ssbfp" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.014454 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ssbfp" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.021440 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gqgp7"] Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.108059 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47f91446-e767-4f28-b77a-e77a7b9cd842-etc-swift\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.108172 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-swiftconf\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.108348 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxxnf\" (UniqueName: \"kubernetes.io/projected/47f91446-e767-4f28-b77a-e77a7b9cd842-kube-api-access-cxxnf\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.108459 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-combined-ca-bundle\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.108556 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-scripts\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.108643 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-dispersionconf\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.108677 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-ring-data-devices\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.210488 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-scripts\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.210562 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-dispersionconf\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.210603 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-ring-data-devices\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.210664 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47f91446-e767-4f28-b77a-e77a7b9cd842-etc-swift\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.210681 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-swiftconf\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.210727 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxxnf\" (UniqueName: \"kubernetes.io/projected/47f91446-e767-4f28-b77a-e77a7b9cd842-kube-api-access-cxxnf\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.210757 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-combined-ca-bundle\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.211504 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-ring-data-devices\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.212743 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47f91446-e767-4f28-b77a-e77a7b9cd842-etc-swift\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.215497 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-scripts\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.219582 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-combined-ca-bundle\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.238686 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-swiftconf\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.241840 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-dispersionconf\") pod \"swift-ring-rebalance-gqgp7\" (UID: 
\"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.244810 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxxnf\" (UniqueName: \"kubernetes.io/projected/47f91446-e767-4f28-b77a-e77a7b9cd842-kube-api-access-cxxnf\") pod \"swift-ring-rebalance-gqgp7\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.289208 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.613578 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.613962 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.709577 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-t9kdw"] Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.710974 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t9kdw" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.713631 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.730073 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t9kdw"] Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.755289 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.773209 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-gqgp7"] Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.822270 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1867cabd-41a8-413a-9d37-943e5661a535-operator-scripts\") pod \"root-account-create-update-t9kdw\" (UID: \"1867cabd-41a8-413a-9d37-943e5661a535\") " pod="openstack/root-account-create-update-t9kdw" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.822419 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wbsg\" (UniqueName: \"kubernetes.io/projected/1867cabd-41a8-413a-9d37-943e5661a535-kube-api-access-6wbsg\") pod \"root-account-create-update-t9kdw\" (UID: \"1867cabd-41a8-413a-9d37-943e5661a535\") " pod="openstack/root-account-create-update-t9kdw" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.924515 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1867cabd-41a8-413a-9d37-943e5661a535-operator-scripts\") pod \"root-account-create-update-t9kdw\" (UID: \"1867cabd-41a8-413a-9d37-943e5661a535\") " 
pod="openstack/root-account-create-update-t9kdw" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.924892 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wbsg\" (UniqueName: \"kubernetes.io/projected/1867cabd-41a8-413a-9d37-943e5661a535-kube-api-access-6wbsg\") pod \"root-account-create-update-t9kdw\" (UID: \"1867cabd-41a8-413a-9d37-943e5661a535\") " pod="openstack/root-account-create-update-t9kdw" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.926053 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1867cabd-41a8-413a-9d37-943e5661a535-operator-scripts\") pod \"root-account-create-update-t9kdw\" (UID: \"1867cabd-41a8-413a-9d37-943e5661a535\") " pod="openstack/root-account-create-update-t9kdw" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.948015 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wbsg\" (UniqueName: \"kubernetes.io/projected/1867cabd-41a8-413a-9d37-943e5661a535-kube-api-access-6wbsg\") pod \"root-account-create-update-t9kdw\" (UID: \"1867cabd-41a8-413a-9d37-943e5661a535\") " pod="openstack/root-account-create-update-t9kdw" Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.997661 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gqgp7" event={"ID":"47f91446-e767-4f28-b77a-e77a7b9cd842","Type":"ContainerStarted","Data":"8e2899001c5c06d8308185b5b6c251f924d1161bc61d1ca6f8a73bc72c5967f5"} Jan 21 10:55:47 crc kubenswrapper[4745]: I0121 10:55:47.997839 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ssbfp" Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.035077 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t9kdw" Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.050695 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-ssbfp"] Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.062474 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-ssbfp"] Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.108642 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.600434 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-t9kdw"] Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.789738 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-6bqnp"] Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.790794 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6bqnp" Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.797371 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6bqnp"] Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.885229 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-4083-account-create-update-lc4t4"] Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.886268 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4083-account-create-update-lc4t4" Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.888849 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.896697 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4083-account-create-update-lc4t4"] Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.954631 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4773a81-6741-4319-8bb6-e4ec0badc52b-operator-scripts\") pod \"keystone-db-create-6bqnp\" (UID: \"f4773a81-6741-4319-8bb6-e4ec0badc52b\") " pod="openstack/keystone-db-create-6bqnp" Jan 21 10:55:48 crc kubenswrapper[4745]: I0121 10:55:48.954685 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g25kp\" (UniqueName: \"kubernetes.io/projected/f4773a81-6741-4319-8bb6-e4ec0badc52b-kube-api-access-g25kp\") pod \"keystone-db-create-6bqnp\" (UID: \"f4773a81-6741-4319-8bb6-e4ec0badc52b\") " pod="openstack/keystone-db-create-6bqnp" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.004620 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t9kdw" event={"ID":"1867cabd-41a8-413a-9d37-943e5661a535","Type":"ContainerStarted","Data":"f85d7351d2a7805e41089c6c60325c0789e868ec29d97099d6f519c2b65f6b63"} Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.004668 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t9kdw" event={"ID":"1867cabd-41a8-413a-9d37-943e5661a535","Type":"ContainerStarted","Data":"59d78e0014061158c062489d269c65a751819fcecf85ff592407c94ec19702bd"} Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.026282 4745 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/root-account-create-update-t9kdw" podStartSLOduration=2.026262596 podStartE2EDuration="2.026262596s" podCreationTimestamp="2026-01-21 10:55:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:55:49.020414357 +0000 UTC m=+1133.481201955" watchObservedRunningTime="2026-01-21 10:55:49.026262596 +0000 UTC m=+1133.487050194" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.062463 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-operator-scripts\") pod \"keystone-4083-account-create-update-lc4t4\" (UID: \"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f\") " pod="openstack/keystone-4083-account-create-update-lc4t4" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.062545 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4773a81-6741-4319-8bb6-e4ec0badc52b-operator-scripts\") pod \"keystone-db-create-6bqnp\" (UID: \"f4773a81-6741-4319-8bb6-e4ec0badc52b\") " pod="openstack/keystone-db-create-6bqnp" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.062579 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g25kp\" (UniqueName: \"kubernetes.io/projected/f4773a81-6741-4319-8bb6-e4ec0badc52b-kube-api-access-g25kp\") pod \"keystone-db-create-6bqnp\" (UID: \"f4773a81-6741-4319-8bb6-e4ec0badc52b\") " pod="openstack/keystone-db-create-6bqnp" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.062646 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hcmx\" (UniqueName: \"kubernetes.io/projected/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-kube-api-access-8hcmx\") pod 
\"keystone-4083-account-create-update-lc4t4\" (UID: \"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f\") " pod="openstack/keystone-4083-account-create-update-lc4t4" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.064071 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4773a81-6741-4319-8bb6-e4ec0badc52b-operator-scripts\") pod \"keystone-db-create-6bqnp\" (UID: \"f4773a81-6741-4319-8bb6-e4ec0badc52b\") " pod="openstack/keystone-db-create-6bqnp" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.083478 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-pcgvc"] Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.084499 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pcgvc" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.094716 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g25kp\" (UniqueName: \"kubernetes.io/projected/f4773a81-6741-4319-8bb6-e4ec0badc52b-kube-api-access-g25kp\") pod \"keystone-db-create-6bqnp\" (UID: \"f4773a81-6741-4319-8bb6-e4ec0badc52b\") " pod="openstack/keystone-db-create-6bqnp" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.109685 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-6bqnp" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.124282 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pcgvc"] Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.166225 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hcmx\" (UniqueName: \"kubernetes.io/projected/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-kube-api-access-8hcmx\") pod \"keystone-4083-account-create-update-lc4t4\" (UID: \"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f\") " pod="openstack/keystone-4083-account-create-update-lc4t4" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.167128 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-operator-scripts\") pod \"keystone-4083-account-create-update-lc4t4\" (UID: \"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f\") " pod="openstack/keystone-4083-account-create-update-lc4t4" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.173700 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-operator-scripts\") pod \"keystone-4083-account-create-update-lc4t4\" (UID: \"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f\") " pod="openstack/keystone-4083-account-create-update-lc4t4" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.200757 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hcmx\" (UniqueName: \"kubernetes.io/projected/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-kube-api-access-8hcmx\") pod \"keystone-4083-account-create-update-lc4t4\" (UID: \"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f\") " pod="openstack/keystone-4083-account-create-update-lc4t4" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.208442 4745 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/keystone-4083-account-create-update-lc4t4" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.218058 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-0d34-account-create-update-lmqpb"] Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.219185 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0d34-account-create-update-lmqpb" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.221220 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.233267 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0d34-account-create-update-lmqpb"] Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.277016 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmktx\" (UniqueName: \"kubernetes.io/projected/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-kube-api-access-nmktx\") pod \"placement-db-create-pcgvc\" (UID: \"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf\") " pod="openstack/placement-db-create-pcgvc" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.277180 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-operator-scripts\") pod \"placement-db-create-pcgvc\" (UID: \"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf\") " pod="openstack/placement-db-create-pcgvc" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.381015 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmktx\" (UniqueName: \"kubernetes.io/projected/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-kube-api-access-nmktx\") pod \"placement-db-create-pcgvc\" (UID: \"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf\") " 
pod="openstack/placement-db-create-pcgvc" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.381255 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7effc287-b786-4ed3-84a8-e7bc8ec693cb-operator-scripts\") pod \"placement-0d34-account-create-update-lmqpb\" (UID: \"7effc287-b786-4ed3-84a8-e7bc8ec693cb\") " pod="openstack/placement-0d34-account-create-update-lmqpb" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.381468 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-operator-scripts\") pod \"placement-db-create-pcgvc\" (UID: \"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf\") " pod="openstack/placement-db-create-pcgvc" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.381735 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p65cz\" (UniqueName: \"kubernetes.io/projected/7effc287-b786-4ed3-84a8-e7bc8ec693cb-kube-api-access-p65cz\") pod \"placement-0d34-account-create-update-lmqpb\" (UID: \"7effc287-b786-4ed3-84a8-e7bc8ec693cb\") " pod="openstack/placement-0d34-account-create-update-lmqpb" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.382681 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-operator-scripts\") pod \"placement-db-create-pcgvc\" (UID: \"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf\") " pod="openstack/placement-db-create-pcgvc" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.432489 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmktx\" (UniqueName: \"kubernetes.io/projected/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-kube-api-access-nmktx\") pod \"placement-db-create-pcgvc\" 
(UID: \"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf\") " pod="openstack/placement-db-create-pcgvc" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.448679 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-zdp4x"] Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.449707 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zdp4x" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.459864 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pcgvc" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.485376 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7effc287-b786-4ed3-84a8-e7bc8ec693cb-operator-scripts\") pod \"placement-0d34-account-create-update-lmqpb\" (UID: \"7effc287-b786-4ed3-84a8-e7bc8ec693cb\") " pod="openstack/placement-0d34-account-create-update-lmqpb" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.485614 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p65cz\" (UniqueName: \"kubernetes.io/projected/7effc287-b786-4ed3-84a8-e7bc8ec693cb-kube-api-access-p65cz\") pod \"placement-0d34-account-create-update-lmqpb\" (UID: \"7effc287-b786-4ed3-84a8-e7bc8ec693cb\") " pod="openstack/placement-0d34-account-create-update-lmqpb" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.490451 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7effc287-b786-4ed3-84a8-e7bc8ec693cb-operator-scripts\") pod \"placement-0d34-account-create-update-lmqpb\" (UID: \"7effc287-b786-4ed3-84a8-e7bc8ec693cb\") " pod="openstack/placement-0d34-account-create-update-lmqpb" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.511052 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-db-create-zdp4x"] Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.528919 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-4b85-account-create-update-m8v62"] Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.531295 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4b85-account-create-update-m8v62" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.535197 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.549664 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4b85-account-create-update-m8v62"] Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.551357 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p65cz\" (UniqueName: \"kubernetes.io/projected/7effc287-b786-4ed3-84a8-e7bc8ec693cb-kube-api-access-p65cz\") pod \"placement-0d34-account-create-update-lmqpb\" (UID: \"7effc287-b786-4ed3-84a8-e7bc8ec693cb\") " pod="openstack/placement-0d34-account-create-update-lmqpb" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.596208 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhgs5\" (UniqueName: \"kubernetes.io/projected/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-kube-api-access-mhgs5\") pod \"glance-db-create-zdp4x\" (UID: \"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213\") " pod="openstack/glance-db-create-zdp4x" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.596407 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-operator-scripts\") pod \"glance-db-create-zdp4x\" (UID: \"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213\") " pod="openstack/glance-db-create-zdp4x" Jan 21 10:55:49 crc 
kubenswrapper[4745]: I0121 10:55:49.617024 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0d34-account-create-update-lmqpb" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.702196 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhgs5\" (UniqueName: \"kubernetes.io/projected/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-kube-api-access-mhgs5\") pod \"glance-db-create-zdp4x\" (UID: \"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213\") " pod="openstack/glance-db-create-zdp4x" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.702288 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mkx\" (UniqueName: \"kubernetes.io/projected/7c0203a2-37c2-4036-803d-3f2e86396cda-kube-api-access-w4mkx\") pod \"glance-4b85-account-create-update-m8v62\" (UID: \"7c0203a2-37c2-4036-803d-3f2e86396cda\") " pod="openstack/glance-4b85-account-create-update-m8v62" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.702335 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-operator-scripts\") pod \"glance-db-create-zdp4x\" (UID: \"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213\") " pod="openstack/glance-db-create-zdp4x" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.702387 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c0203a2-37c2-4036-803d-3f2e86396cda-operator-scripts\") pod \"glance-4b85-account-create-update-m8v62\" (UID: \"7c0203a2-37c2-4036-803d-3f2e86396cda\") " pod="openstack/glance-4b85-account-create-update-m8v62" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.703489 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-operator-scripts\") pod \"glance-db-create-zdp4x\" (UID: \"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213\") " pod="openstack/glance-db-create-zdp4x" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.723362 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhgs5\" (UniqueName: \"kubernetes.io/projected/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-kube-api-access-mhgs5\") pod \"glance-db-create-zdp4x\" (UID: \"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213\") " pod="openstack/glance-db-create-zdp4x" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.792100 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zdp4x" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.804845 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4mkx\" (UniqueName: \"kubernetes.io/projected/7c0203a2-37c2-4036-803d-3f2e86396cda-kube-api-access-w4mkx\") pod \"glance-4b85-account-create-update-m8v62\" (UID: \"7c0203a2-37c2-4036-803d-3f2e86396cda\") " pod="openstack/glance-4b85-account-create-update-m8v62" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.804939 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c0203a2-37c2-4036-803d-3f2e86396cda-operator-scripts\") pod \"glance-4b85-account-create-update-m8v62\" (UID: \"7c0203a2-37c2-4036-803d-3f2e86396cda\") " pod="openstack/glance-4b85-account-create-update-m8v62" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.805659 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c0203a2-37c2-4036-803d-3f2e86396cda-operator-scripts\") pod \"glance-4b85-account-create-update-m8v62\" (UID: \"7c0203a2-37c2-4036-803d-3f2e86396cda\") " 
pod="openstack/glance-4b85-account-create-update-m8v62" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.830383 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4mkx\" (UniqueName: \"kubernetes.io/projected/7c0203a2-37c2-4036-803d-3f2e86396cda-kube-api-access-w4mkx\") pod \"glance-4b85-account-create-update-m8v62\" (UID: \"7c0203a2-37c2-4036-803d-3f2e86396cda\") " pod="openstack/glance-4b85-account-create-update-m8v62" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.875745 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-6bqnp"] Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.896626 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4b85-account-create-update-m8v62" Jan 21 10:55:49 crc kubenswrapper[4745]: I0121 10:55:49.991366 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-4083-account-create-update-lc4t4"] Jan 21 10:55:50 crc kubenswrapper[4745]: I0121 10:55:50.031316 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a711db6-0c34-43f5-8f06-e164199cdbac" path="/var/lib/kubelet/pods/4a711db6-0c34-43f5-8f06-e164199cdbac/volumes" Jan 21 10:55:50 crc kubenswrapper[4745]: I0121 10:55:50.049647 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6bqnp" event={"ID":"f4773a81-6741-4319-8bb6-e4ec0badc52b","Type":"ContainerStarted","Data":"90c74a9403c98ef9b55564ec9155922d8aa5bfe795d22443e7512df0e725bd98"} Jan 21 10:55:50 crc kubenswrapper[4745]: I0121 10:55:50.055473 4745 generic.go:334] "Generic (PLEG): container finished" podID="1867cabd-41a8-413a-9d37-943e5661a535" containerID="f85d7351d2a7805e41089c6c60325c0789e868ec29d97099d6f519c2b65f6b63" exitCode=0 Jan 21 10:55:50 crc kubenswrapper[4745]: I0121 10:55:50.055653 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t9kdw" 
event={"ID":"1867cabd-41a8-413a-9d37-943e5661a535","Type":"ContainerDied","Data":"f85d7351d2a7805e41089c6c60325c0789e868ec29d97099d6f519c2b65f6b63"} Jan 21 10:55:50 crc kubenswrapper[4745]: I0121 10:55:50.148629 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-pcgvc"] Jan 21 10:55:51 crc kubenswrapper[4745]: I0121 10:55:50.360910 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-0d34-account-create-update-lmqpb"] Jan 21 10:55:51 crc kubenswrapper[4745]: I0121 10:55:50.474598 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-zdp4x"] Jan 21 10:55:51 crc kubenswrapper[4745]: I0121 10:55:50.620895 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-4b85-account-create-update-m8v62"] Jan 21 10:55:51 crc kubenswrapper[4745]: I0121 10:55:50.932821 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:51 crc kubenswrapper[4745]: E0121 10:55:50.933188 4745 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 10:55:51 crc kubenswrapper[4745]: E0121 10:55:50.933235 4745 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 10:55:51 crc kubenswrapper[4745]: E0121 10:55:50.933324 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift podName:e3c32d66-7e7d-40dc-8726-2084e85452af nodeName:}" failed. No retries permitted until 2026-01-21 10:55:58.9332965 +0000 UTC m=+1143.394084098 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift") pod "swift-storage-0" (UID: "e3c32d66-7e7d-40dc-8726-2084e85452af") : configmap "swift-ring-files" not found Jan 21 10:55:51 crc kubenswrapper[4745]: I0121 10:55:51.072191 4745 generic.go:334] "Generic (PLEG): container finished" podID="f4773a81-6741-4319-8bb6-e4ec0badc52b" containerID="123244fabb89d9ac2d241710054e9e1ea4357e315ccd2d53fc74549cc26e462b" exitCode=0 Jan 21 10:55:51 crc kubenswrapper[4745]: I0121 10:55:51.072313 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6bqnp" event={"ID":"f4773a81-6741-4319-8bb6-e4ec0badc52b","Type":"ContainerDied","Data":"123244fabb89d9ac2d241710054e9e1ea4357e315ccd2d53fc74549cc26e462b"} Jan 21 10:55:51 crc kubenswrapper[4745]: I0121 10:55:51.074183 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4083-account-create-update-lc4t4" event={"ID":"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f","Type":"ContainerStarted","Data":"ff6c76fee709e03e539bc8b694a1112266cbc81f371b855b094f791a88de731b"} Jan 21 10:55:51 crc kubenswrapper[4745]: W0121 10:55:51.466073 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd98a0e3_2cd0_48f9_a24c_bd485fe3a3cf.slice/crio-8c0d94dbfb4f20a1f82cf5e6d422137452097a21ed201393534565d00d95d7a2 WatchSource:0}: Error finding container 8c0d94dbfb4f20a1f82cf5e6d422137452097a21ed201393534565d00d95d7a2: Status 404 returned error can't find the container with id 8c0d94dbfb4f20a1f82cf5e6d422137452097a21ed201393534565d00d95d7a2 Jan 21 10:55:51 crc kubenswrapper[4745]: I0121 10:55:51.714548 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 21 10:55:52 crc kubenswrapper[4745]: I0121 10:55:52.088251 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-pcgvc" event={"ID":"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf","Type":"ContainerStarted","Data":"8c0d94dbfb4f20a1f82cf5e6d422137452097a21ed201393534565d00d95d7a2"} Jan 21 10:55:52 crc kubenswrapper[4745]: I0121 10:55:52.227709 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:55:52 crc kubenswrapper[4745]: I0121 10:55:52.321192 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jqsw"] Jan 21 10:55:52 crc kubenswrapper[4745]: I0121 10:55:52.321605 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" podUID="c130b339-04cf-40b3-bb1b-5354c12cece1" containerName="dnsmasq-dns" containerID="cri-o://6dd2eaa091ce28e37fe722ba56b86b3fca7077a7bd70a6442973a08aae8bf7db" gracePeriod=10 Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.103857 4745 generic.go:334] "Generic (PLEG): container finished" podID="c130b339-04cf-40b3-bb1b-5354c12cece1" containerID="6dd2eaa091ce28e37fe722ba56b86b3fca7077a7bd70a6442973a08aae8bf7db" exitCode=0 Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.103922 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" event={"ID":"c130b339-04cf-40b3-bb1b-5354c12cece1","Type":"ContainerDied","Data":"6dd2eaa091ce28e37fe722ba56b86b3fca7077a7bd70a6442973a08aae8bf7db"} Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.643419 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t9kdw" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.663332 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1867cabd-41a8-413a-9d37-943e5661a535-operator-scripts\") pod \"1867cabd-41a8-413a-9d37-943e5661a535\" (UID: \"1867cabd-41a8-413a-9d37-943e5661a535\") " Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.663448 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wbsg\" (UniqueName: \"kubernetes.io/projected/1867cabd-41a8-413a-9d37-943e5661a535-kube-api-access-6wbsg\") pod \"1867cabd-41a8-413a-9d37-943e5661a535\" (UID: \"1867cabd-41a8-413a-9d37-943e5661a535\") " Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.664480 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1867cabd-41a8-413a-9d37-943e5661a535-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1867cabd-41a8-413a-9d37-943e5661a535" (UID: "1867cabd-41a8-413a-9d37-943e5661a535"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.666937 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1867cabd-41a8-413a-9d37-943e5661a535-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.682954 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1867cabd-41a8-413a-9d37-943e5661a535-kube-api-access-6wbsg" (OuterVolumeSpecName: "kube-api-access-6wbsg") pod "1867cabd-41a8-413a-9d37-943e5661a535" (UID: "1867cabd-41a8-413a-9d37-943e5661a535"). InnerVolumeSpecName "kube-api-access-6wbsg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.741385 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6bqnp" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.767789 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4773a81-6741-4319-8bb6-e4ec0badc52b-operator-scripts\") pod \"f4773a81-6741-4319-8bb6-e4ec0badc52b\" (UID: \"f4773a81-6741-4319-8bb6-e4ec0badc52b\") " Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.767883 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g25kp\" (UniqueName: \"kubernetes.io/projected/f4773a81-6741-4319-8bb6-e4ec0badc52b-kube-api-access-g25kp\") pod \"f4773a81-6741-4319-8bb6-e4ec0badc52b\" (UID: \"f4773a81-6741-4319-8bb6-e4ec0badc52b\") " Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.768359 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wbsg\" (UniqueName: \"kubernetes.io/projected/1867cabd-41a8-413a-9d37-943e5661a535-kube-api-access-6wbsg\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.770159 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4773a81-6741-4319-8bb6-e4ec0badc52b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f4773a81-6741-4319-8bb6-e4ec0badc52b" (UID: "f4773a81-6741-4319-8bb6-e4ec0badc52b"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.783908 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4773a81-6741-4319-8bb6-e4ec0badc52b-kube-api-access-g25kp" (OuterVolumeSpecName: "kube-api-access-g25kp") pod "f4773a81-6741-4319-8bb6-e4ec0badc52b" (UID: "f4773a81-6741-4319-8bb6-e4ec0badc52b"). InnerVolumeSpecName "kube-api-access-g25kp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.799204 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.869726 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-config\") pod \"c130b339-04cf-40b3-bb1b-5354c12cece1\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.870261 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-sb\") pod \"c130b339-04cf-40b3-bb1b-5354c12cece1\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.870330 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-dns-svc\") pod \"c130b339-04cf-40b3-bb1b-5354c12cece1\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.870414 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-nb\") 
pod \"c130b339-04cf-40b3-bb1b-5354c12cece1\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.870474 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb2l8\" (UniqueName: \"kubernetes.io/projected/c130b339-04cf-40b3-bb1b-5354c12cece1-kube-api-access-pb2l8\") pod \"c130b339-04cf-40b3-bb1b-5354c12cece1\" (UID: \"c130b339-04cf-40b3-bb1b-5354c12cece1\") " Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.872910 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f4773a81-6741-4319-8bb6-e4ec0badc52b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.872940 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g25kp\" (UniqueName: \"kubernetes.io/projected/f4773a81-6741-4319-8bb6-e4ec0badc52b-kube-api-access-g25kp\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.880572 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c130b339-04cf-40b3-bb1b-5354c12cece1-kube-api-access-pb2l8" (OuterVolumeSpecName: "kube-api-access-pb2l8") pod "c130b339-04cf-40b3-bb1b-5354c12cece1" (UID: "c130b339-04cf-40b3-bb1b-5354c12cece1"). InnerVolumeSpecName "kube-api-access-pb2l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.927488 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c130b339-04cf-40b3-bb1b-5354c12cece1" (UID: "c130b339-04cf-40b3-bb1b-5354c12cece1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.930806 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c130b339-04cf-40b3-bb1b-5354c12cece1" (UID: "c130b339-04cf-40b3-bb1b-5354c12cece1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.939908 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c130b339-04cf-40b3-bb1b-5354c12cece1" (UID: "c130b339-04cf-40b3-bb1b-5354c12cece1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.942007 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-config" (OuterVolumeSpecName: "config") pod "c130b339-04cf-40b3-bb1b-5354c12cece1" (UID: "c130b339-04cf-40b3-bb1b-5354c12cece1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.974842 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.974885 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.974896 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.974910 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c130b339-04cf-40b3-bb1b-5354c12cece1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:53 crc kubenswrapper[4745]: I0121 10:55:53.974919 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb2l8\" (UniqueName: \"kubernetes.io/projected/c130b339-04cf-40b3-bb1b-5354c12cece1-kube-api-access-pb2l8\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.115229 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0d34-account-create-update-lmqpb" event={"ID":"7effc287-b786-4ed3-84a8-e7bc8ec693cb","Type":"ContainerStarted","Data":"711bbdd2d05dad49ec790ff4d3fd607e1856e777d302cc3034241f882e70678e"} Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.118419 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" 
event={"ID":"c130b339-04cf-40b3-bb1b-5354c12cece1","Type":"ContainerDied","Data":"3d7c442dc357e13e09415f2199f9e03911ddd81ed24244eca2b01460fa3cbf3a"} Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.118480 4745 scope.go:117] "RemoveContainer" containerID="6dd2eaa091ce28e37fe722ba56b86b3fca7077a7bd70a6442973a08aae8bf7db" Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.118500 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.123826 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-6bqnp" Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.123995 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-6bqnp" event={"ID":"f4773a81-6741-4319-8bb6-e4ec0badc52b","Type":"ContainerDied","Data":"90c74a9403c98ef9b55564ec9155922d8aa5bfe795d22443e7512df0e725bd98"} Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.124035 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90c74a9403c98ef9b55564ec9155922d8aa5bfe795d22443e7512df0e725bd98" Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.127872 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-t9kdw" Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.127891 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-t9kdw" event={"ID":"1867cabd-41a8-413a-9d37-943e5661a535","Type":"ContainerDied","Data":"59d78e0014061158c062489d269c65a751819fcecf85ff592407c94ec19702bd"} Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.127985 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59d78e0014061158c062489d269c65a751819fcecf85ff592407c94ec19702bd" Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.132544 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4b85-account-create-update-m8v62" event={"ID":"7c0203a2-37c2-4036-803d-3f2e86396cda","Type":"ContainerStarted","Data":"d231588081d70d78bdef8cf7971e0cfbbea911d69c3c518a372f704023c60aca"} Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.134446 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zdp4x" event={"ID":"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213","Type":"ContainerStarted","Data":"971e36cd291259737fee73bdb1c824e8079b55bc433c2a5d7ebd7114629cfb5d"} Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.145673 4745 scope.go:117] "RemoveContainer" containerID="5aafb6c6b33705ac7babfa4394b3117ee4ef3eb7f7affd7ec44d6385a6f8d365" Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.158314 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jqsw"] Jan 21 10:55:54 crc kubenswrapper[4745]: I0121 10:55:54.167661 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-8jqsw"] Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.013665 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c130b339-04cf-40b3-bb1b-5354c12cece1" path="/var/lib/kubelet/pods/c130b339-04cf-40b3-bb1b-5354c12cece1/volumes" 
Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.149942 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zdp4x" event={"ID":"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213","Type":"ContainerStarted","Data":"ba0b931b3f33510b964ebce883ebe4922952aa41cb2ff7cf35aadf162cbe2700"} Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.151657 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4083-account-create-update-lc4t4" event={"ID":"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f","Type":"ContainerStarted","Data":"f08b6a5431fc0245ddefdda89248b758593c5c6049bb16ae0bf6e81d6e6c477c"} Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.152880 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4b85-account-create-update-m8v62" event={"ID":"7c0203a2-37c2-4036-803d-3f2e86396cda","Type":"ContainerStarted","Data":"4794202b30214054696fc1d938aa058f1eca53c8a3be108c77d5ba8795f5a39f"} Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.153940 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pcgvc" event={"ID":"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf","Type":"ContainerStarted","Data":"8cef54c7b8ff35361805071da5cd62ade53b699a6f71d02461aa4d9e16c41cf1"} Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.155892 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gqgp7" event={"ID":"47f91446-e767-4f28-b77a-e77a7b9cd842","Type":"ContainerStarted","Data":"4bbae28f3a00f7b265e438efebfd02f69ec429fdda0602acdf736ee8219492e7"} Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.157380 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0d34-account-create-update-lmqpb" event={"ID":"7effc287-b786-4ed3-84a8-e7bc8ec693cb","Type":"ContainerStarted","Data":"d94c34b48ac1863fcefb6ad33a4c0ca20dd6cf7b254b6f1fa90519aa07551d78"} Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.216863 4745 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-t9kdw"] Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.226892 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-t9kdw"] Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.315246 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nbpk2"] Jan 21 10:55:56 crc kubenswrapper[4745]: E0121 10:55:56.315571 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4773a81-6741-4319-8bb6-e4ec0badc52b" containerName="mariadb-database-create" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.315588 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4773a81-6741-4319-8bb6-e4ec0badc52b" containerName="mariadb-database-create" Jan 21 10:55:56 crc kubenswrapper[4745]: E0121 10:55:56.315618 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1867cabd-41a8-413a-9d37-943e5661a535" containerName="mariadb-account-create-update" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.315624 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1867cabd-41a8-413a-9d37-943e5661a535" containerName="mariadb-account-create-update" Jan 21 10:55:56 crc kubenswrapper[4745]: E0121 10:55:56.315631 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c130b339-04cf-40b3-bb1b-5354c12cece1" containerName="dnsmasq-dns" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.315638 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c130b339-04cf-40b3-bb1b-5354c12cece1" containerName="dnsmasq-dns" Jan 21 10:55:56 crc kubenswrapper[4745]: E0121 10:55:56.315649 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c130b339-04cf-40b3-bb1b-5354c12cece1" containerName="init" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.315655 4745 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c130b339-04cf-40b3-bb1b-5354c12cece1" containerName="init" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.315820 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="1867cabd-41a8-413a-9d37-943e5661a535" containerName="mariadb-account-create-update" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.315833 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4773a81-6741-4319-8bb6-e4ec0badc52b" containerName="mariadb-database-create" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.315844 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c130b339-04cf-40b3-bb1b-5354c12cece1" containerName="dnsmasq-dns" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.316376 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nbpk2" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.318986 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.340877 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nbpk2"] Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.416444 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtpkf\" (UniqueName: \"kubernetes.io/projected/4797d15e-02f2-467d-bc45-735158652b7f-kube-api-access-wtpkf\") pod \"root-account-create-update-nbpk2\" (UID: \"4797d15e-02f2-467d-bc45-735158652b7f\") " pod="openstack/root-account-create-update-nbpk2" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.416649 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4797d15e-02f2-467d-bc45-735158652b7f-operator-scripts\") pod \"root-account-create-update-nbpk2\" (UID: 
\"4797d15e-02f2-467d-bc45-735158652b7f\") " pod="openstack/root-account-create-update-nbpk2" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.518306 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtpkf\" (UniqueName: \"kubernetes.io/projected/4797d15e-02f2-467d-bc45-735158652b7f-kube-api-access-wtpkf\") pod \"root-account-create-update-nbpk2\" (UID: \"4797d15e-02f2-467d-bc45-735158652b7f\") " pod="openstack/root-account-create-update-nbpk2" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.518749 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4797d15e-02f2-467d-bc45-735158652b7f-operator-scripts\") pod \"root-account-create-update-nbpk2\" (UID: \"4797d15e-02f2-467d-bc45-735158652b7f\") " pod="openstack/root-account-create-update-nbpk2" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.520090 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4797d15e-02f2-467d-bc45-735158652b7f-operator-scripts\") pod \"root-account-create-update-nbpk2\" (UID: \"4797d15e-02f2-467d-bc45-735158652b7f\") " pod="openstack/root-account-create-update-nbpk2" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.539146 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtpkf\" (UniqueName: \"kubernetes.io/projected/4797d15e-02f2-467d-bc45-735158652b7f-kube-api-access-wtpkf\") pod \"root-account-create-update-nbpk2\" (UID: \"4797d15e-02f2-467d-bc45-735158652b7f\") " pod="openstack/root-account-create-update-nbpk2" Jan 21 10:55:56 crc kubenswrapper[4745]: I0121 10:55:56.638352 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nbpk2" Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.087895 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nbpk2"] Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.171662 4745 generic.go:334] "Generic (PLEG): container finished" podID="7c0203a2-37c2-4036-803d-3f2e86396cda" containerID="4794202b30214054696fc1d938aa058f1eca53c8a3be108c77d5ba8795f5a39f" exitCode=0 Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.171720 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4b85-account-create-update-m8v62" event={"ID":"7c0203a2-37c2-4036-803d-3f2e86396cda","Type":"ContainerDied","Data":"4794202b30214054696fc1d938aa058f1eca53c8a3be108c77d5ba8795f5a39f"} Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.181271 4745 generic.go:334] "Generic (PLEG): container finished" podID="cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf" containerID="8cef54c7b8ff35361805071da5cd62ade53b699a6f71d02461aa4d9e16c41cf1" exitCode=0 Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.181357 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pcgvc" event={"ID":"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf","Type":"ContainerDied","Data":"8cef54c7b8ff35361805071da5cd62ade53b699a6f71d02461aa4d9e16c41cf1"} Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.191877 4745 generic.go:334] "Generic (PLEG): container finished" podID="88d1294c-ad74-4bbf-ab56-cfc7f9c9c213" containerID="ba0b931b3f33510b964ebce883ebe4922952aa41cb2ff7cf35aadf162cbe2700" exitCode=0 Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.191955 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zdp4x" event={"ID":"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213","Type":"ContainerDied","Data":"ba0b931b3f33510b964ebce883ebe4922952aa41cb2ff7cf35aadf162cbe2700"} Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 
10:55:57.198322 4745 generic.go:334] "Generic (PLEG): container finished" podID="7effc287-b786-4ed3-84a8-e7bc8ec693cb" containerID="d94c34b48ac1863fcefb6ad33a4c0ca20dd6cf7b254b6f1fa90519aa07551d78" exitCode=0 Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.198816 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0d34-account-create-update-lmqpb" event={"ID":"7effc287-b786-4ed3-84a8-e7bc8ec693cb","Type":"ContainerDied","Data":"d94c34b48ac1863fcefb6ad33a4c0ca20dd6cf7b254b6f1fa90519aa07551d78"} Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.200779 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nbpk2" event={"ID":"4797d15e-02f2-467d-bc45-735158652b7f","Type":"ContainerStarted","Data":"36dcf17e7de052dff05b1ed15cd940f9f0bb04ab129d8bdee0115dafdcfbed2e"} Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.204748 4745 generic.go:334] "Generic (PLEG): container finished" podID="258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f" containerID="f08b6a5431fc0245ddefdda89248b758593c5c6049bb16ae0bf6e81d6e6c477c" exitCode=0 Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.204903 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4083-account-create-update-lc4t4" event={"ID":"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f","Type":"ContainerDied","Data":"f08b6a5431fc0245ddefdda89248b758593c5c6049bb16ae0bf6e81d6e6c477c"} Jan 21 10:55:57 crc kubenswrapper[4745]: I0121 10:55:57.304824 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-gqgp7" podStartSLOduration=5.670792661 podStartE2EDuration="11.304795608s" podCreationTimestamp="2026-01-21 10:55:46 +0000 UTC" firstStartedPulling="2026-01-21 10:55:47.780670175 +0000 UTC m=+1132.241457773" lastFinishedPulling="2026-01-21 10:55:53.414673122 +0000 UTC m=+1137.875460720" observedRunningTime="2026-01-21 10:55:57.294640199 +0000 UTC m=+1141.755427797" 
watchObservedRunningTime="2026-01-21 10:55:57.304795608 +0000 UTC m=+1141.765583226" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.021181 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1867cabd-41a8-413a-9d37-943e5661a535" path="/var/lib/kubelet/pods/1867cabd-41a8-413a-9d37-943e5661a535/volumes" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.218415 4745 generic.go:334] "Generic (PLEG): container finished" podID="4797d15e-02f2-467d-bc45-735158652b7f" containerID="2406ae50264187dce315a4b62fadd851442d2a86b880bd3994da31a4c582aaf0" exitCode=0 Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.218583 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nbpk2" event={"ID":"4797d15e-02f2-467d-bc45-735158652b7f","Type":"ContainerDied","Data":"2406ae50264187dce315a4b62fadd851442d2a86b880bd3994da31a4c582aaf0"} Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.551558 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-8jqsw" podUID="c130b339-04cf-40b3-bb1b-5354c12cece1" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: i/o timeout" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.566657 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-4083-account-create-update-lc4t4" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.652921 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hcmx\" (UniqueName: \"kubernetes.io/projected/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-kube-api-access-8hcmx\") pod \"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f\" (UID: \"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f\") " Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.653265 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-operator-scripts\") pod \"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f\" (UID: \"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f\") " Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.655513 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f" (UID: "258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.669679 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-kube-api-access-8hcmx" (OuterVolumeSpecName: "kube-api-access-8hcmx") pod "258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f" (UID: "258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f"). InnerVolumeSpecName "kube-api-access-8hcmx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.767555 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hcmx\" (UniqueName: \"kubernetes.io/projected/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-kube-api-access-8hcmx\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.767598 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.831354 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-4b85-account-create-update-m8v62" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.837211 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zdp4x" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.882084 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhgs5\" (UniqueName: \"kubernetes.io/projected/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-kube-api-access-mhgs5\") pod \"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213\" (UID: \"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213\") " Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.883413 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-operator-scripts\") pod \"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213\" (UID: \"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213\") " Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.883507 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c0203a2-37c2-4036-803d-3f2e86396cda-operator-scripts\") pod 
\"7c0203a2-37c2-4036-803d-3f2e86396cda\" (UID: \"7c0203a2-37c2-4036-803d-3f2e86396cda\") " Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.883613 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4mkx\" (UniqueName: \"kubernetes.io/projected/7c0203a2-37c2-4036-803d-3f2e86396cda-kube-api-access-w4mkx\") pod \"7c0203a2-37c2-4036-803d-3f2e86396cda\" (UID: \"7c0203a2-37c2-4036-803d-3f2e86396cda\") " Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.884834 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88d1294c-ad74-4bbf-ab56-cfc7f9c9c213" (UID: "88d1294c-ad74-4bbf-ab56-cfc7f9c9c213"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.885884 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-kube-api-access-mhgs5" (OuterVolumeSpecName: "kube-api-access-mhgs5") pod "88d1294c-ad74-4bbf-ab56-cfc7f9c9c213" (UID: "88d1294c-ad74-4bbf-ab56-cfc7f9c9c213"). InnerVolumeSpecName "kube-api-access-mhgs5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.887312 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0d34-account-create-update-lmqpb" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.887605 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0203a2-37c2-4036-803d-3f2e86396cda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7c0203a2-37c2-4036-803d-3f2e86396cda" (UID: "7c0203a2-37c2-4036-803d-3f2e86396cda"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.887776 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0203a2-37c2-4036-803d-3f2e86396cda-kube-api-access-w4mkx" (OuterVolumeSpecName: "kube-api-access-w4mkx") pod "7c0203a2-37c2-4036-803d-3f2e86396cda" (UID: "7c0203a2-37c2-4036-803d-3f2e86396cda"). InnerVolumeSpecName "kube-api-access-w4mkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.905457 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pcgvc" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.984713 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-operator-scripts\") pod \"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf\" (UID: \"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf\") " Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.984890 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmktx\" (UniqueName: \"kubernetes.io/projected/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-kube-api-access-nmktx\") pod \"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf\" (UID: \"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf\") " Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.984955 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7effc287-b786-4ed3-84a8-e7bc8ec693cb-operator-scripts\") pod \"7effc287-b786-4ed3-84a8-e7bc8ec693cb\" (UID: \"7effc287-b786-4ed3-84a8-e7bc8ec693cb\") " Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.985019 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p65cz\" (UniqueName: 
\"kubernetes.io/projected/7effc287-b786-4ed3-84a8-e7bc8ec693cb-kube-api-access-p65cz\") pod \"7effc287-b786-4ed3-84a8-e7bc8ec693cb\" (UID: \"7effc287-b786-4ed3-84a8-e7bc8ec693cb\") " Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.985379 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.985590 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhgs5\" (UniqueName: \"kubernetes.io/projected/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-kube-api-access-mhgs5\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.985614 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.985673 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7c0203a2-37c2-4036-803d-3f2e86396cda-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.985686 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4mkx\" (UniqueName: \"kubernetes.io/projected/7c0203a2-37c2-4036-803d-3f2e86396cda-kube-api-access-w4mkx\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.985769 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7effc287-b786-4ed3-84a8-e7bc8ec693cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7effc287-b786-4ed3-84a8-e7bc8ec693cb" (UID: "7effc287-b786-4ed3-84a8-e7bc8ec693cb"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:58 crc kubenswrapper[4745]: E0121 10:55:58.985789 4745 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 21 10:55:58 crc kubenswrapper[4745]: E0121 10:55:58.985845 4745 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 21 10:55:58 crc kubenswrapper[4745]: E0121 10:55:58.985896 4745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift podName:e3c32d66-7e7d-40dc-8726-2084e85452af nodeName:}" failed. No retries permitted until 2026-01-21 10:56:14.985878749 +0000 UTC m=+1159.446666347 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift") pod "swift-storage-0" (UID: "e3c32d66-7e7d-40dc-8726-2084e85452af") : configmap "swift-ring-files" not found Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.985922 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf" (UID: "cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.988846 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-kube-api-access-nmktx" (OuterVolumeSpecName: "kube-api-access-nmktx") pod "cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf" (UID: "cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf"). InnerVolumeSpecName "kube-api-access-nmktx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:58 crc kubenswrapper[4745]: I0121 10:55:58.988931 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7effc287-b786-4ed3-84a8-e7bc8ec693cb-kube-api-access-p65cz" (OuterVolumeSpecName: "kube-api-access-p65cz") pod "7effc287-b786-4ed3-84a8-e7bc8ec693cb" (UID: "7effc287-b786-4ed3-84a8-e7bc8ec693cb"). InnerVolumeSpecName "kube-api-access-p65cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.086882 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p65cz\" (UniqueName: \"kubernetes.io/projected/7effc287-b786-4ed3-84a8-e7bc8ec693cb-kube-api-access-p65cz\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.086908 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.086920 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmktx\" (UniqueName: \"kubernetes.io/projected/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf-kube-api-access-nmktx\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.086932 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7effc287-b786-4ed3-84a8-e7bc8ec693cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.229958 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-zdp4x" event={"ID":"88d1294c-ad74-4bbf-ab56-cfc7f9c9c213","Type":"ContainerDied","Data":"971e36cd291259737fee73bdb1c824e8079b55bc433c2a5d7ebd7114629cfb5d"} Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.230014 
4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="971e36cd291259737fee73bdb1c824e8079b55bc433c2a5d7ebd7114629cfb5d" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.231216 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-0d34-account-create-update-lmqpb" event={"ID":"7effc287-b786-4ed3-84a8-e7bc8ec693cb","Type":"ContainerDied","Data":"711bbdd2d05dad49ec790ff4d3fd607e1856e777d302cc3034241f882e70678e"} Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.231249 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="711bbdd2d05dad49ec790ff4d3fd607e1856e777d302cc3034241f882e70678e" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.231274 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-0d34-account-create-update-lmqpb" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.232054 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-zdp4x" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.256417 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-4083-account-create-update-lc4t4" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.256457 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-4083-account-create-update-lc4t4" event={"ID":"258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f","Type":"ContainerDied","Data":"ff6c76fee709e03e539bc8b694a1112266cbc81f371b855b094f791a88de731b"} Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.256485 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff6c76fee709e03e539bc8b694a1112266cbc81f371b855b094f791a88de731b" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.260881 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-4b85-account-create-update-m8v62" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.260881 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-4b85-account-create-update-m8v62" event={"ID":"7c0203a2-37c2-4036-803d-3f2e86396cda","Type":"ContainerDied","Data":"d231588081d70d78bdef8cf7971e0cfbbea911d69c3c518a372f704023c60aca"} Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.261009 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d231588081d70d78bdef8cf7971e0cfbbea911d69c3c518a372f704023c60aca" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.276452 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-pcgvc" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.276630 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-pcgvc" event={"ID":"cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf","Type":"ContainerDied","Data":"8c0d94dbfb4f20a1f82cf5e6d422137452097a21ed201393534565d00d95d7a2"} Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.276667 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c0d94dbfb4f20a1f82cf5e6d422137452097a21ed201393534565d00d95d7a2" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.576249 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nbpk2" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.707591 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wtpkf\" (UniqueName: \"kubernetes.io/projected/4797d15e-02f2-467d-bc45-735158652b7f-kube-api-access-wtpkf\") pod \"4797d15e-02f2-467d-bc45-735158652b7f\" (UID: \"4797d15e-02f2-467d-bc45-735158652b7f\") " Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.707703 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4797d15e-02f2-467d-bc45-735158652b7f-operator-scripts\") pod \"4797d15e-02f2-467d-bc45-735158652b7f\" (UID: \"4797d15e-02f2-467d-bc45-735158652b7f\") " Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.708721 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4797d15e-02f2-467d-bc45-735158652b7f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4797d15e-02f2-467d-bc45-735158652b7f" (UID: "4797d15e-02f2-467d-bc45-735158652b7f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.711596 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4797d15e-02f2-467d-bc45-735158652b7f-kube-api-access-wtpkf" (OuterVolumeSpecName: "kube-api-access-wtpkf") pod "4797d15e-02f2-467d-bc45-735158652b7f" (UID: "4797d15e-02f2-467d-bc45-735158652b7f"). InnerVolumeSpecName "kube-api-access-wtpkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.809906 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wtpkf\" (UniqueName: \"kubernetes.io/projected/4797d15e-02f2-467d-bc45-735158652b7f-kube-api-access-wtpkf\") on node \"crc\" DevicePath \"\"" Jan 21 10:55:59 crc kubenswrapper[4745]: I0121 10:55:59.809945 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4797d15e-02f2-467d-bc45-735158652b7f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:00 crc kubenswrapper[4745]: I0121 10:56:00.285610 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nbpk2" event={"ID":"4797d15e-02f2-467d-bc45-735158652b7f","Type":"ContainerDied","Data":"36dcf17e7de052dff05b1ed15cd940f9f0bb04ab129d8bdee0115dafdcfbed2e"} Jan 21 10:56:00 crc kubenswrapper[4745]: I0121 10:56:00.285651 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36dcf17e7de052dff05b1ed15cd940f9f0bb04ab129d8bdee0115dafdcfbed2e" Jan 21 10:56:00 crc kubenswrapper[4745]: I0121 10:56:00.285752 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-nbpk2" Jan 21 10:56:02 crc kubenswrapper[4745]: I0121 10:56:02.735782 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nbpk2"] Jan 21 10:56:02 crc kubenswrapper[4745]: I0121 10:56:02.751663 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nbpk2"] Jan 21 10:56:03 crc kubenswrapper[4745]: I0121 10:56:03.307963 4745 generic.go:334] "Generic (PLEG): container finished" podID="47f91446-e767-4f28-b77a-e77a7b9cd842" containerID="4bbae28f3a00f7b265e438efebfd02f69ec429fdda0602acdf736ee8219492e7" exitCode=0 Jan 21 10:56:03 crc kubenswrapper[4745]: I0121 10:56:03.308003 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gqgp7" event={"ID":"47f91446-e767-4f28-b77a-e77a7b9cd842","Type":"ContainerDied","Data":"4bbae28f3a00f7b265e438efebfd02f69ec429fdda0602acdf736ee8219492e7"} Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.016566 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4797d15e-02f2-467d-bc45-735158652b7f" path="/var/lib/kubelet/pods/4797d15e-02f2-467d-bc45-735158652b7f/volumes" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.499382 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.535587 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-xs6fp" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.677207 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.688981 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-cr2xq"] Jan 21 10:56:04 crc kubenswrapper[4745]: E0121 10:56:04.689408 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d1294c-ad74-4bbf-ab56-cfc7f9c9c213" containerName="mariadb-database-create" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689429 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d1294c-ad74-4bbf-ab56-cfc7f9c9c213" containerName="mariadb-database-create" Jan 21 10:56:04 crc kubenswrapper[4745]: E0121 10:56:04.689449 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf" containerName="mariadb-database-create" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689457 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf" containerName="mariadb-database-create" Jan 21 10:56:04 crc kubenswrapper[4745]: E0121 10:56:04.689472 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7effc287-b786-4ed3-84a8-e7bc8ec693cb" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689479 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7effc287-b786-4ed3-84a8-e7bc8ec693cb" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: E0121 10:56:04.689490 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4797d15e-02f2-467d-bc45-735158652b7f" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689498 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4797d15e-02f2-467d-bc45-735158652b7f" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: E0121 10:56:04.689509 4745 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="47f91446-e767-4f28-b77a-e77a7b9cd842" containerName="swift-ring-rebalance" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689517 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="47f91446-e767-4f28-b77a-e77a7b9cd842" containerName="swift-ring-rebalance" Jan 21 10:56:04 crc kubenswrapper[4745]: E0121 10:56:04.689543 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689550 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: E0121 10:56:04.689587 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c0203a2-37c2-4036-803d-3f2e86396cda" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689596 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c0203a2-37c2-4036-803d-3f2e86396cda" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689782 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf" containerName="mariadb-database-create" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689799 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689809 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="47f91446-e767-4f28-b77a-e77a7b9cd842" containerName="swift-ring-rebalance" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689826 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d1294c-ad74-4bbf-ab56-cfc7f9c9c213" 
containerName="mariadb-database-create" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689836 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0203a2-37c2-4036-803d-3f2e86396cda" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689848 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7effc287-b786-4ed3-84a8-e7bc8ec693cb" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.689857 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="4797d15e-02f2-467d-bc45-735158652b7f" containerName="mariadb-account-create-update" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.690516 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.693996 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.695333 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6gq8x" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.709876 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cr2xq"] Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.783612 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-t8gd4-config-rtqmk"] Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.784805 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.789211 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.797620 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-t8gd4-config-rtqmk"] Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.806383 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47f91446-e767-4f28-b77a-e77a7b9cd842-etc-swift\") pod \"47f91446-e767-4f28-b77a-e77a7b9cd842\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.806607 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-ring-data-devices\") pod \"47f91446-e767-4f28-b77a-e77a7b9cd842\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.806668 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-combined-ca-bundle\") pod \"47f91446-e767-4f28-b77a-e77a7b9cd842\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.806701 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-scripts\") pod \"47f91446-e767-4f28-b77a-e77a7b9cd842\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.806740 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-dispersionconf\") pod \"47f91446-e767-4f28-b77a-e77a7b9cd842\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.806849 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxxnf\" (UniqueName: \"kubernetes.io/projected/47f91446-e767-4f28-b77a-e77a7b9cd842-kube-api-access-cxxnf\") pod \"47f91446-e767-4f28-b77a-e77a7b9cd842\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.806877 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-swiftconf\") pod \"47f91446-e767-4f28-b77a-e77a7b9cd842\" (UID: \"47f91446-e767-4f28-b77a-e77a7b9cd842\") " Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.807217 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-config-data\") pod \"glance-db-sync-cr2xq\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.807254 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xlzc\" (UniqueName: \"kubernetes.io/projected/619fc0d2-35d7-4927-b904-5bf122e76d24-kube-api-access-6xlzc\") pod \"glance-db-sync-cr2xq\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.807298 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-combined-ca-bundle\") pod \"glance-db-sync-cr2xq\" (UID: 
\"619fc0d2-35d7-4927-b904-5bf122e76d24\") " pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.807328 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-db-sync-config-data\") pod \"glance-db-sync-cr2xq\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.807835 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47f91446-e767-4f28-b77a-e77a7b9cd842-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "47f91446-e767-4f28-b77a-e77a7b9cd842" (UID: "47f91446-e767-4f28-b77a-e77a7b9cd842"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.810829 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "47f91446-e767-4f28-b77a-e77a7b9cd842" (UID: "47f91446-e767-4f28-b77a-e77a7b9cd842"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.821935 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47f91446-e767-4f28-b77a-e77a7b9cd842-kube-api-access-cxxnf" (OuterVolumeSpecName: "kube-api-access-cxxnf") pod "47f91446-e767-4f28-b77a-e77a7b9cd842" (UID: "47f91446-e767-4f28-b77a-e77a7b9cd842"). InnerVolumeSpecName "kube-api-access-cxxnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.846228 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47f91446-e767-4f28-b77a-e77a7b9cd842" (UID: "47f91446-e767-4f28-b77a-e77a7b9cd842"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.866640 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "47f91446-e767-4f28-b77a-e77a7b9cd842" (UID: "47f91446-e767-4f28-b77a-e77a7b9cd842"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.869028 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "47f91446-e767-4f28-b77a-e77a7b9cd842" (UID: "47f91446-e767-4f28-b77a-e77a7b9cd842"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.871749 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-scripts" (OuterVolumeSpecName: "scripts") pod "47f91446-e767-4f28-b77a-e77a7b9cd842" (UID: "47f91446-e767-4f28-b77a-e77a7b9cd842"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.909722 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z248v\" (UniqueName: \"kubernetes.io/projected/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-kube-api-access-z248v\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.909854 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-config-data\") pod \"glance-db-sync-cr2xq\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.909891 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xlzc\" (UniqueName: \"kubernetes.io/projected/619fc0d2-35d7-4927-b904-5bf122e76d24-kube-api-access-6xlzc\") pod \"glance-db-sync-cr2xq\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.909925 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-log-ovn\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.909968 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-combined-ca-bundle\") pod \"glance-db-sync-cr2xq\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " 
pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.909994 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run-ovn\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.910029 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-db-sync-config-data\") pod \"glance-db-sync-cr2xq\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.910089 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-scripts\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.910131 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.910213 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-additional-scripts\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: 
\"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.910665 4745 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.910697 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.910719 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47f91446-e767-4f28-b77a-e77a7b9cd842-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.910735 4745 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.910754 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxxnf\" (UniqueName: \"kubernetes.io/projected/47f91446-e767-4f28-b77a-e77a7b9cd842-kube-api-access-cxxnf\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.910840 4745 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47f91446-e767-4f28-b77a-e77a7b9cd842-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.910882 4745 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47f91446-e767-4f28-b77a-e77a7b9cd842-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:04 crc kubenswrapper[4745]: 
I0121 10:56:04.914354 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-db-sync-config-data\") pod \"glance-db-sync-cr2xq\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.916571 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-config-data\") pod \"glance-db-sync-cr2xq\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.921122 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-combined-ca-bundle\") pod \"glance-db-sync-cr2xq\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:04 crc kubenswrapper[4745]: I0121 10:56:04.929513 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xlzc\" (UniqueName: \"kubernetes.io/projected/619fc0d2-35d7-4927-b904-5bf122e76d24-kube-api-access-6xlzc\") pod \"glance-db-sync-cr2xq\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.006036 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.012390 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-scripts\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.012434 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.012501 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-additional-scripts\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.013110 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.013727 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z248v\" (UniqueName: \"kubernetes.io/projected/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-kube-api-access-z248v\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " 
pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.013781 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-log-ovn\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.013812 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run-ovn\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.013729 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-additional-scripts\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.013943 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run-ovn\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.014071 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-log-ovn\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " 
pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.015126 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-scripts\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.052212 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z248v\" (UniqueName: \"kubernetes.io/projected/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-kube-api-access-z248v\") pod \"ovn-controller-t8gd4-config-rtqmk\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.117003 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.363867 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-gqgp7" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.367792 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-gqgp7" event={"ID":"47f91446-e767-4f28-b77a-e77a7b9cd842","Type":"ContainerDied","Data":"8e2899001c5c06d8308185b5b6c251f924d1161bc61d1ca6f8a73bc72c5967f5"} Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.367856 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e2899001c5c06d8308185b5b6c251f924d1161bc61d1ca6f8a73bc72c5967f5" Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.431855 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-t8gd4-config-rtqmk"] Jan 21 10:56:05 crc kubenswrapper[4745]: W0121 10:56:05.432458 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefbc5298_aae5_4e51_9506_6eb1d1f3fc1e.slice/crio-ad89ad679451b2a11eb52f5e6d6d6db777da92549a34f36d38a8288bb1f0b5d3 WatchSource:0}: Error finding container ad89ad679451b2a11eb52f5e6d6d6db777da92549a34f36d38a8288bb1f0b5d3: Status 404 returned error can't find the container with id ad89ad679451b2a11eb52f5e6d6d6db777da92549a34f36d38a8288bb1f0b5d3 Jan 21 10:56:05 crc kubenswrapper[4745]: I0121 10:56:05.640272 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-cr2xq"] Jan 21 10:56:05 crc kubenswrapper[4745]: W0121 10:56:05.659966 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod619fc0d2_35d7_4927_b904_5bf122e76d24.slice/crio-742e17788d05d837e654cdf618a9f628f22882669601def2000d4878660b0ba7 WatchSource:0}: Error finding container 742e17788d05d837e654cdf618a9f628f22882669601def2000d4878660b0ba7: Status 404 returned error can't find the container with id 
742e17788d05d837e654cdf618a9f628f22882669601def2000d4878660b0ba7 Jan 21 10:56:06 crc kubenswrapper[4745]: I0121 10:56:06.369947 4745 generic.go:334] "Generic (PLEG): container finished" podID="efbc5298-aae5-4e51-9506-6eb1d1f3fc1e" containerID="6bca8a3f9db747f7e36c5c7ed91e6af7408d39817c37286f426a1898f2be45c1" exitCode=0 Jan 21 10:56:06 crc kubenswrapper[4745]: I0121 10:56:06.370039 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-t8gd4-config-rtqmk" event={"ID":"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e","Type":"ContainerDied","Data":"6bca8a3f9db747f7e36c5c7ed91e6af7408d39817c37286f426a1898f2be45c1"} Jan 21 10:56:06 crc kubenswrapper[4745]: I0121 10:56:06.370071 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-t8gd4-config-rtqmk" event={"ID":"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e","Type":"ContainerStarted","Data":"ad89ad679451b2a11eb52f5e6d6d6db777da92549a34f36d38a8288bb1f0b5d3"} Jan 21 10:56:06 crc kubenswrapper[4745]: I0121 10:56:06.371564 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cr2xq" event={"ID":"619fc0d2-35d7-4927-b904-5bf122e76d24","Type":"ContainerStarted","Data":"742e17788d05d837e654cdf618a9f628f22882669601def2000d4878660b0ba7"} Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.743884 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gzsc2"] Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.745085 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gzsc2" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.749146 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.756264 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gzsc2"] Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.789651 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.906516 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-additional-scripts\") pod \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.906700 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-scripts\") pod \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.906743 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run\") pod \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.906771 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z248v\" (UniqueName: \"kubernetes.io/projected/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-kube-api-access-z248v\") pod \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\" (UID: 
\"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.906870 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run-ovn\") pod \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.906911 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-log-ovn\") pod \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\" (UID: \"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e\") " Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.906938 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run" (OuterVolumeSpecName: "var-run") pod "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e" (UID: "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.907042 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e" (UID: "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.907176 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e" (UID: "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.907372 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm8hx\" (UniqueName: \"kubernetes.io/projected/d78adcaa-487f-4b09-879f-a5c680fee573-kube-api-access-pm8hx\") pod \"root-account-create-update-gzsc2\" (UID: \"d78adcaa-487f-4b09-879f-a5c680fee573\") " pod="openstack/root-account-create-update-gzsc2" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.907408 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e" (UID: "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.907771 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d78adcaa-487f-4b09-879f-a5c680fee573-operator-scripts\") pod \"root-account-create-update-gzsc2\" (UID: \"d78adcaa-487f-4b09-879f-a5c680fee573\") " pod="openstack/root-account-create-update-gzsc2" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.907899 4745 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.907915 4745 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.907925 4745 reconciler_common.go:293] "Volume detached for 
volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.907938 4745 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-var-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.908063 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-scripts" (OuterVolumeSpecName: "scripts") pod "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e" (UID: "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:07 crc kubenswrapper[4745]: I0121 10:56:07.923697 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-kube-api-access-z248v" (OuterVolumeSpecName: "kube-api-access-z248v") pod "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e" (UID: "efbc5298-aae5-4e51-9506-6eb1d1f3fc1e"). InnerVolumeSpecName "kube-api-access-z248v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.009429 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm8hx\" (UniqueName: \"kubernetes.io/projected/d78adcaa-487f-4b09-879f-a5c680fee573-kube-api-access-pm8hx\") pod \"root-account-create-update-gzsc2\" (UID: \"d78adcaa-487f-4b09-879f-a5c680fee573\") " pod="openstack/root-account-create-update-gzsc2" Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.009567 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d78adcaa-487f-4b09-879f-a5c680fee573-operator-scripts\") pod \"root-account-create-update-gzsc2\" (UID: \"d78adcaa-487f-4b09-879f-a5c680fee573\") " pod="openstack/root-account-create-update-gzsc2" Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.009641 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.009655 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z248v\" (UniqueName: \"kubernetes.io/projected/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e-kube-api-access-z248v\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.028683 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d78adcaa-487f-4b09-879f-a5c680fee573-operator-scripts\") pod \"root-account-create-update-gzsc2\" (UID: \"d78adcaa-487f-4b09-879f-a5c680fee573\") " pod="openstack/root-account-create-update-gzsc2" Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.041111 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm8hx\" (UniqueName: 
\"kubernetes.io/projected/d78adcaa-487f-4b09-879f-a5c680fee573-kube-api-access-pm8hx\") pod \"root-account-create-update-gzsc2\" (UID: \"d78adcaa-487f-4b09-879f-a5c680fee573\") " pod="openstack/root-account-create-update-gzsc2" Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.103035 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gzsc2" Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.416816 4745 generic.go:334] "Generic (PLEG): container finished" podID="557c4211-e324-49a4-8493-6685e4f5bee8" containerID="c6f7996113b4bddd9c946091c6d575b94b2e4d227cbd53bacf0332274d5d275c" exitCode=0 Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.417010 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c4211-e324-49a4-8493-6685e4f5bee8","Type":"ContainerDied","Data":"c6f7996113b4bddd9c946091c6d575b94b2e4d227cbd53bacf0332274d5d275c"} Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.419271 4745 generic.go:334] "Generic (PLEG): container finished" podID="4af3b414-a820-42a8-89c4-f9cade535b01" containerID="d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1" exitCode=0 Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.419514 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4af3b414-a820-42a8-89c4-f9cade535b01","Type":"ContainerDied","Data":"d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1"} Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.421070 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-t8gd4-config-rtqmk" event={"ID":"efbc5298-aae5-4e51-9506-6eb1d1f3fc1e","Type":"ContainerDied","Data":"ad89ad679451b2a11eb52f5e6d6d6db777da92549a34f36d38a8288bb1f0b5d3"} Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.421098 4745 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ad89ad679451b2a11eb52f5e6d6d6db777da92549a34f36d38a8288bb1f0b5d3" Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.421220 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-t8gd4-config-rtqmk" Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.554946 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gzsc2"] Jan 21 10:56:08 crc kubenswrapper[4745]: W0121 10:56:08.568166 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd78adcaa_487f_4b09_879f_a5c680fee573.slice/crio-eaa96adc25c8d37086c55a5ea0bbeacbfc2bef3e74d7cebbc927da59a2cc5561 WatchSource:0}: Error finding container eaa96adc25c8d37086c55a5ea0bbeacbfc2bef3e74d7cebbc927da59a2cc5561: Status 404 returned error can't find the container with id eaa96adc25c8d37086c55a5ea0bbeacbfc2bef3e74d7cebbc927da59a2cc5561 Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.916126 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-t8gd4-config-rtqmk"] Jan 21 10:56:08 crc kubenswrapper[4745]: I0121 10:56:08.922709 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-t8gd4-config-rtqmk"] Jan 21 10:56:09 crc kubenswrapper[4745]: I0121 10:56:09.432236 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c4211-e324-49a4-8493-6685e4f5bee8","Type":"ContainerStarted","Data":"1c6dbbcee43881f6df4956ed7f9529f8a880205583ac0c54cb141310e5486f4e"} Jan 21 10:56:09 crc kubenswrapper[4745]: I0121 10:56:09.432556 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:56:09 crc kubenswrapper[4745]: I0121 10:56:09.436352 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"4af3b414-a820-42a8-89c4-f9cade535b01","Type":"ContainerStarted","Data":"963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672"} Jan 21 10:56:09 crc kubenswrapper[4745]: I0121 10:56:09.436614 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 10:56:09 crc kubenswrapper[4745]: I0121 10:56:09.441268 4745 generic.go:334] "Generic (PLEG): container finished" podID="d78adcaa-487f-4b09-879f-a5c680fee573" containerID="c4bd5ca67543e5695924ea9805a43d6ffbc6e7ee22cd95b7b6558b9b4616c382" exitCode=0 Jan 21 10:56:09 crc kubenswrapper[4745]: I0121 10:56:09.441389 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gzsc2" event={"ID":"d78adcaa-487f-4b09-879f-a5c680fee573","Type":"ContainerDied","Data":"c4bd5ca67543e5695924ea9805a43d6ffbc6e7ee22cd95b7b6558b9b4616c382"} Jan 21 10:56:09 crc kubenswrapper[4745]: I0121 10:56:09.441465 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gzsc2" event={"ID":"d78adcaa-487f-4b09-879f-a5c680fee573","Type":"ContainerStarted","Data":"eaa96adc25c8d37086c55a5ea0bbeacbfc2bef3e74d7cebbc927da59a2cc5561"} Jan 21 10:56:09 crc kubenswrapper[4745]: I0121 10:56:09.471308 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.627043071 podStartE2EDuration="1m24.471284362s" podCreationTimestamp="2026-01-21 10:54:45 +0000 UTC" firstStartedPulling="2026-01-21 10:54:48.152727288 +0000 UTC m=+1072.613514886" lastFinishedPulling="2026-01-21 10:55:33.996968579 +0000 UTC m=+1118.457756177" observedRunningTime="2026-01-21 10:56:09.456801673 +0000 UTC m=+1153.917589271" watchObservedRunningTime="2026-01-21 10:56:09.471284362 +0000 UTC m=+1153.932071960" Jan 21 10:56:09 crc kubenswrapper[4745]: I0121 10:56:09.500310 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-0" podStartSLOduration=37.711265518 podStartE2EDuration="1m24.50028868s" podCreationTimestamp="2026-01-21 10:54:45 +0000 UTC" firstStartedPulling="2026-01-21 10:54:47.275275691 +0000 UTC m=+1071.736063289" lastFinishedPulling="2026-01-21 10:55:34.064298853 +0000 UTC m=+1118.525086451" observedRunningTime="2026-01-21 10:56:09.487494735 +0000 UTC m=+1153.948282333" watchObservedRunningTime="2026-01-21 10:56:09.50028868 +0000 UTC m=+1153.961076278" Jan 21 10:56:09 crc kubenswrapper[4745]: I0121 10:56:09.515892 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-t8gd4" Jan 21 10:56:10 crc kubenswrapper[4745]: I0121 10:56:10.011524 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efbc5298-aae5-4e51-9506-6eb1d1f3fc1e" path="/var/lib/kubelet/pods/efbc5298-aae5-4e51-9506-6eb1d1f3fc1e/volumes" Jan 21 10:56:10 crc kubenswrapper[4745]: I0121 10:56:10.830961 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gzsc2" Jan 21 10:56:10 crc kubenswrapper[4745]: I0121 10:56:10.866750 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d78adcaa-487f-4b09-879f-a5c680fee573-operator-scripts\") pod \"d78adcaa-487f-4b09-879f-a5c680fee573\" (UID: \"d78adcaa-487f-4b09-879f-a5c680fee573\") " Jan 21 10:56:10 crc kubenswrapper[4745]: I0121 10:56:10.866811 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pm8hx\" (UniqueName: \"kubernetes.io/projected/d78adcaa-487f-4b09-879f-a5c680fee573-kube-api-access-pm8hx\") pod \"d78adcaa-487f-4b09-879f-a5c680fee573\" (UID: \"d78adcaa-487f-4b09-879f-a5c680fee573\") " Jan 21 10:56:10 crc kubenswrapper[4745]: I0121 10:56:10.867719 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d78adcaa-487f-4b09-879f-a5c680fee573-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d78adcaa-487f-4b09-879f-a5c680fee573" (UID: "d78adcaa-487f-4b09-879f-a5c680fee573"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:10 crc kubenswrapper[4745]: I0121 10:56:10.877917 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d78adcaa-487f-4b09-879f-a5c680fee573-kube-api-access-pm8hx" (OuterVolumeSpecName: "kube-api-access-pm8hx") pod "d78adcaa-487f-4b09-879f-a5c680fee573" (UID: "d78adcaa-487f-4b09-879f-a5c680fee573"). InnerVolumeSpecName "kube-api-access-pm8hx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:56:10 crc kubenswrapper[4745]: I0121 10:56:10.968550 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d78adcaa-487f-4b09-879f-a5c680fee573-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:10 crc kubenswrapper[4745]: I0121 10:56:10.968587 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pm8hx\" (UniqueName: \"kubernetes.io/projected/d78adcaa-487f-4b09-879f-a5c680fee573-kube-api-access-pm8hx\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:11 crc kubenswrapper[4745]: I0121 10:56:11.459837 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gzsc2" event={"ID":"d78adcaa-487f-4b09-879f-a5c680fee573","Type":"ContainerDied","Data":"eaa96adc25c8d37086c55a5ea0bbeacbfc2bef3e74d7cebbc927da59a2cc5561"} Jan 21 10:56:11 crc kubenswrapper[4745]: I0121 10:56:11.459898 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eaa96adc25c8d37086c55a5ea0bbeacbfc2bef3e74d7cebbc927da59a2cc5561" Jan 21 10:56:11 crc kubenswrapper[4745]: I0121 10:56:11.459965 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-gzsc2" Jan 21 10:56:14 crc kubenswrapper[4745]: I0121 10:56:14.107921 4745 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","poda9fae0a3-ae1c-4c51-8632-13424ad116f6"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort poda9fae0a3-ae1c-4c51-8632-13424ad116f6] : Timed out while waiting for systemd to remove kubepods-besteffort-poda9fae0a3_ae1c_4c51_8632_13424ad116f6.slice" Jan 21 10:56:14 crc kubenswrapper[4745]: E0121 10:56:14.109335 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort poda9fae0a3-ae1c-4c51-8632-13424ad116f6] : unable to destroy cgroup paths for cgroup [kubepods besteffort poda9fae0a3-ae1c-4c51-8632-13424ad116f6] : Timed out while waiting for systemd to remove kubepods-besteffort-poda9fae0a3_ae1c_4c51_8632_13424ad116f6.slice" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" podUID="a9fae0a3-ae1c-4c51-8632-13424ad116f6" Jan 21 10:56:14 crc kubenswrapper[4745]: I0121 10:56:14.483087 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-vnhmq" Jan 21 10:56:14 crc kubenswrapper[4745]: I0121 10:56:14.531471 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vnhmq"] Jan 21 10:56:14 crc kubenswrapper[4745]: I0121 10:56:14.543503 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-vnhmq"] Jan 21 10:56:15 crc kubenswrapper[4745]: I0121 10:56:15.058241 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:56:15 crc kubenswrapper[4745]: I0121 10:56:15.067026 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e3c32d66-7e7d-40dc-8726-2084e85452af-etc-swift\") pod \"swift-storage-0\" (UID: \"e3c32d66-7e7d-40dc-8726-2084e85452af\") " pod="openstack/swift-storage-0" Jan 21 10:56:15 crc kubenswrapper[4745]: I0121 10:56:15.225845 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 21 10:56:16 crc kubenswrapper[4745]: I0121 10:56:16.012112 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9fae0a3-ae1c-4c51-8632-13424ad116f6" path="/var/lib/kubelet/pods/a9fae0a3-ae1c-4c51-8632-13424ad116f6/volumes" Jan 21 10:56:24 crc kubenswrapper[4745]: E0121 10:56:24.549813 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Jan 21 10:56:24 crc kubenswrapper[4745]: E0121 10:56:24.551030 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xlzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recur
siveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-cr2xq_openstack(619fc0d2-35d7-4927-b904-5bf122e76d24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:56:24 crc kubenswrapper[4745]: E0121 10:56:24.553286 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-cr2xq" podUID="619fc0d2-35d7-4927-b904-5bf122e76d24" Jan 21 10:56:24 crc kubenswrapper[4745]: E0121 10:56:24.588218 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-cr2xq" podUID="619fc0d2-35d7-4927-b904-5bf122e76d24" Jan 21 10:56:25 crc kubenswrapper[4745]: I0121 10:56:25.322406 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 21 10:56:25 crc kubenswrapper[4745]: W0121 10:56:25.330574 4745 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3c32d66_7e7d_40dc_8726_2084e85452af.slice/crio-10004d6c237fdfe9e357dd4399f595531496d44c31d26a252df15b780903f5b5 WatchSource:0}: Error finding container 10004d6c237fdfe9e357dd4399f595531496d44c31d26a252df15b780903f5b5: Status 404 returned error can't find the container with id 10004d6c237fdfe9e357dd4399f595531496d44c31d26a252df15b780903f5b5 Jan 21 10:56:25 crc kubenswrapper[4745]: I0121 10:56:25.594555 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"10004d6c237fdfe9e357dd4399f595531496d44c31d26a252df15b780903f5b5"} Jan 21 10:56:26 crc kubenswrapper[4745]: I0121 10:56:26.499313 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="4af3b414-a820-42a8-89c4-f9cade535b01" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.97:5671: connect: connection refused" Jan 21 10:56:26 crc kubenswrapper[4745]: I0121 10:56:26.872843 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="557c4211-e324-49a4-8493-6685e4f5bee8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 21 10:56:29 crc kubenswrapper[4745]: I0121 10:56:29.629985 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"0531b5b5c0d301ad4d1dd7709c8e4a71015e88d262dabbe78c7440071b495a23"} Jan 21 10:56:29 crc kubenswrapper[4745]: I0121 10:56:29.630379 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"dd6b1fce55f211c2f4c53f1e4841cad8c67c434f856ff744834f0fcc2dbefd0e"} Jan 21 10:56:30 crc kubenswrapper[4745]: I0121 
10:56:30.640784 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"a507b63507ae05d3e6d66d43c5317d3d7f7969019872cd6de5e092bb6efaff94"} Jan 21 10:56:30 crc kubenswrapper[4745]: I0121 10:56:30.641173 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"1ec4f52b5ec67dc642b3c10bcabb5688c5cb322fadab5a832bf9c5bd445e99e2"} Jan 21 10:56:32 crc kubenswrapper[4745]: I0121 10:56:32.668616 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"6fa2f656883b15fd5320557fb737934a3f268bb43849a8e5ef6c8bd3c0bb6b5d"} Jan 21 10:56:32 crc kubenswrapper[4745]: I0121 10:56:32.669193 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"d482ea9f99056e5c15f4e6f2d3f430bcedbaa6fde4716b195ada4f9ec590bb54"} Jan 21 10:56:32 crc kubenswrapper[4745]: I0121 10:56:32.669208 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"225cd24162ed6d9d529a158fa3d29adf335a54361f2f946fdd706c028f7e3579"} Jan 21 10:56:33 crc kubenswrapper[4745]: I0121 10:56:33.689822 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"b5699c901424d6b47ab2d63ae36743f2cdf5a1945afc18a9a5111613104d8d77"} Jan 21 10:56:34 crc kubenswrapper[4745]: I0121 10:56:34.703349 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"742900d54ab674b96aeea72303134a970d17050342a5005c7b7e97b97d0e45ce"} Jan 21 10:56:35 crc kubenswrapper[4745]: I0121 10:56:35.723813 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"143ee3e3f4ba454ac2fbfe7de59351ef5339a1ec8e1b296c8f0d9e821d6e55ba"} Jan 21 10:56:35 crc kubenswrapper[4745]: I0121 10:56:35.724549 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"a1be69e461f2f857731bc5baaba67d13935375e8005050bb4c3a0a5d0dd9cb8f"} Jan 21 10:56:35 crc kubenswrapper[4745]: I0121 10:56:35.724573 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"6541a14f1c401c9e25417715004bb634524298d32231a0fc97dbb137baa35bc8"} Jan 21 10:56:35 crc kubenswrapper[4745]: I0121 10:56:35.724587 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"8cdc9fd10b6d6f6ae2282cde0f964e8e9e170d4078d4be27e9b6df3ad09a0898"} Jan 21 10:56:36 crc kubenswrapper[4745]: I0121 10:56:36.498897 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 10:56:36 crc kubenswrapper[4745]: I0121 10:56:36.737445 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cr2xq" event={"ID":"619fc0d2-35d7-4927-b904-5bf122e76d24","Type":"ContainerStarted","Data":"b49bf2369716e44450e48493ed12bfa8b7e4216a4ceb1de2bdf1dd6a7dd11320"} Jan 21 10:56:36 crc kubenswrapper[4745]: I0121 10:56:36.762657 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"9090d0db4b85f452e5d6d77f2022387c5103c0449a51b28e0970161b8fe12e28"} Jan 21 10:56:36 crc kubenswrapper[4745]: I0121 10:56:36.762718 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e3c32d66-7e7d-40dc-8726-2084e85452af","Type":"ContainerStarted","Data":"33f0de26aaf7c6a110fa5f6011b63a510d13d29a97d15d40ade7638f236c663a"} Jan 21 10:56:36 crc kubenswrapper[4745]: I0121 10:56:36.783763 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-cr2xq" podStartSLOduration=2.717233976 podStartE2EDuration="32.783743865s" podCreationTimestamp="2026-01-21 10:56:04 +0000 UTC" firstStartedPulling="2026-01-21 10:56:05.663188133 +0000 UTC m=+1150.123975721" lastFinishedPulling="2026-01-21 10:56:35.729698022 +0000 UTC m=+1180.190485610" observedRunningTime="2026-01-21 10:56:36.780197054 +0000 UTC m=+1181.240984652" watchObservedRunningTime="2026-01-21 10:56:36.783743865 +0000 UTC m=+1181.244531463" Jan 21 10:56:36 crc kubenswrapper[4745]: I0121 10:56:36.826961 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=45.801119673 podStartE2EDuration="54.826928725s" podCreationTimestamp="2026-01-21 10:55:42 +0000 UTC" firstStartedPulling="2026-01-21 10:56:25.333556603 +0000 UTC m=+1169.794344221" lastFinishedPulling="2026-01-21 10:56:34.359365675 +0000 UTC m=+1178.820153273" observedRunningTime="2026-01-21 10:56:36.819512476 +0000 UTC m=+1181.280300084" watchObservedRunningTime="2026-01-21 10:56:36.826928725 +0000 UTC m=+1181.287716323" Jan 21 10:56:36 crc kubenswrapper[4745]: I0121 10:56:36.875056 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.271007 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-db-create-sm8n4"] Jan 21 10:56:37 crc kubenswrapper[4745]: E0121 10:56:37.271665 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efbc5298-aae5-4e51-9506-6eb1d1f3fc1e" containerName="ovn-config" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.271682 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="efbc5298-aae5-4e51-9506-6eb1d1f3fc1e" containerName="ovn-config" Jan 21 10:56:37 crc kubenswrapper[4745]: E0121 10:56:37.271692 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d78adcaa-487f-4b09-879f-a5c680fee573" containerName="mariadb-account-create-update" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.271699 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d78adcaa-487f-4b09-879f-a5c680fee573" containerName="mariadb-account-create-update" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.271865 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d78adcaa-487f-4b09-879f-a5c680fee573" containerName="mariadb-account-create-update" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.271883 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="efbc5298-aae5-4e51-9506-6eb1d1f3fc1e" containerName="ovn-config" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.272390 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sm8n4" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.304242 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-b6qwz"] Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.305207 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-b6qwz" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.312372 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sm8n4"] Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.345199 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-b6qwz"] Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.372953 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-operator-scripts\") pod \"cinder-db-create-sm8n4\" (UID: \"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae\") " pod="openstack/cinder-db-create-sm8n4" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.373022 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269d4758-9e42-46a9-9e75-b2fee912d2fd-operator-scripts\") pod \"heat-db-create-b6qwz\" (UID: \"269d4758-9e42-46a9-9e75-b2fee912d2fd\") " pod="openstack/heat-db-create-b6qwz" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.373201 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56rzq\" (UniqueName: \"kubernetes.io/projected/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-kube-api-access-56rzq\") pod \"cinder-db-create-sm8n4\" (UID: \"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae\") " pod="openstack/cinder-db-create-sm8n4" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.373336 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gslnc\" (UniqueName: \"kubernetes.io/projected/269d4758-9e42-46a9-9e75-b2fee912d2fd-kube-api-access-gslnc\") pod \"heat-db-create-b6qwz\" (UID: \"269d4758-9e42-46a9-9e75-b2fee912d2fd\") " pod="openstack/heat-db-create-b6qwz" Jan 21 
10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.475139 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gslnc\" (UniqueName: \"kubernetes.io/projected/269d4758-9e42-46a9-9e75-b2fee912d2fd-kube-api-access-gslnc\") pod \"heat-db-create-b6qwz\" (UID: \"269d4758-9e42-46a9-9e75-b2fee912d2fd\") " pod="openstack/heat-db-create-b6qwz" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.475467 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-operator-scripts\") pod \"cinder-db-create-sm8n4\" (UID: \"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae\") " pod="openstack/cinder-db-create-sm8n4" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.475523 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269d4758-9e42-46a9-9e75-b2fee912d2fd-operator-scripts\") pod \"heat-db-create-b6qwz\" (UID: \"269d4758-9e42-46a9-9e75-b2fee912d2fd\") " pod="openstack/heat-db-create-b6qwz" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.475923 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56rzq\" (UniqueName: \"kubernetes.io/projected/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-kube-api-access-56rzq\") pod \"cinder-db-create-sm8n4\" (UID: \"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae\") " pod="openstack/cinder-db-create-sm8n4" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.476210 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269d4758-9e42-46a9-9e75-b2fee912d2fd-operator-scripts\") pod \"heat-db-create-b6qwz\" (UID: \"269d4758-9e42-46a9-9e75-b2fee912d2fd\") " pod="openstack/heat-db-create-b6qwz" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.476827 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-operator-scripts\") pod \"cinder-db-create-sm8n4\" (UID: \"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae\") " pod="openstack/cinder-db-create-sm8n4" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.530086 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gslnc\" (UniqueName: \"kubernetes.io/projected/269d4758-9e42-46a9-9e75-b2fee912d2fd-kube-api-access-gslnc\") pod \"heat-db-create-b6qwz\" (UID: \"269d4758-9e42-46a9-9e75-b2fee912d2fd\") " pod="openstack/heat-db-create-b6qwz" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.548795 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56rzq\" (UniqueName: \"kubernetes.io/projected/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-kube-api-access-56rzq\") pod \"cinder-db-create-sm8n4\" (UID: \"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae\") " pod="openstack/cinder-db-create-sm8n4" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.586940 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sm8n4" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.629947 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-b6qwz" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.647719 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6chvk"] Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.648950 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.663372 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.676448 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6chvk"] Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.678742 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-config\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.678805 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.678869 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.678897 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdhjh\" (UniqueName: \"kubernetes.io/projected/1113c34b-a9b5-4849-b1d8-b46b4e622841-kube-api-access-mdhjh\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " 
pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.678937 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-svc\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.678963 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.781052 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-svc\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.781130 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.781184 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-config\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 
crc kubenswrapper[4745]: I0121 10:56:37.781215 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.781248 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.781274 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdhjh\" (UniqueName: \"kubernetes.io/projected/1113c34b-a9b5-4849-b1d8-b46b4e622841-kube-api-access-mdhjh\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.784310 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-config\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.784899 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.785471 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.786071 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.786732 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-svc\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.825211 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdhjh\" (UniqueName: \"kubernetes.io/projected/1113c34b-a9b5-4849-b1d8-b46b4e622841-kube-api-access-mdhjh\") pod \"dnsmasq-dns-764c5664d7-6chvk\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") " pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.901169 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-e410-account-create-update-sg8cc"] Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.902220 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e410-account-create-update-sg8cc" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.905444 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 21 10:56:37 crc kubenswrapper[4745]: I0121 10:56:37.966893 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e410-account-create-update-sg8cc"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.044950 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-9mgl2"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.046320 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9mgl2" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.065074 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.088019 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9mgl2"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.094684 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-operator-scripts\") pod \"cinder-e410-account-create-update-sg8cc\" (UID: \"076c482a-90e9-4db9-aa66-85e7d6a1ad3b\") " pod="openstack/cinder-e410-account-create-update-sg8cc" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.094785 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g9m4\" (UniqueName: \"kubernetes.io/projected/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-kube-api-access-5g9m4\") pod \"cinder-e410-account-create-update-sg8cc\" (UID: \"076c482a-90e9-4db9-aa66-85e7d6a1ad3b\") " pod="openstack/cinder-e410-account-create-update-sg8cc" Jan 21 10:56:38 crc 
kubenswrapper[4745]: I0121 10:56:38.172124 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-hvq49"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.212652 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hvq49" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.232752 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-operator-scripts\") pod \"cinder-e410-account-create-update-sg8cc\" (UID: \"076c482a-90e9-4db9-aa66-85e7d6a1ad3b\") " pod="openstack/cinder-e410-account-create-update-sg8cc" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.233405 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g9m4\" (UniqueName: \"kubernetes.io/projected/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-kube-api-access-5g9m4\") pod \"cinder-e410-account-create-update-sg8cc\" (UID: \"076c482a-90e9-4db9-aa66-85e7d6a1ad3b\") " pod="openstack/cinder-e410-account-create-update-sg8cc" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.234313 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-operator-scripts\") pod \"barbican-db-create-9mgl2\" (UID: \"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d\") " pod="openstack/barbican-db-create-9mgl2" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.234724 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58hcn\" (UniqueName: \"kubernetes.io/projected/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-kube-api-access-58hcn\") pod \"barbican-db-create-9mgl2\" (UID: \"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d\") " pod="openstack/barbican-db-create-9mgl2" Jan 21 10:56:38 crc 
kubenswrapper[4745]: I0121 10:56:38.239048 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hvq49"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.239086 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-operator-scripts\") pod \"cinder-e410-account-create-update-sg8cc\" (UID: \"076c482a-90e9-4db9-aa66-85e7d6a1ad3b\") " pod="openstack/cinder-e410-account-create-update-sg8cc" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.267788 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g9m4\" (UniqueName: \"kubernetes.io/projected/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-kube-api-access-5g9m4\") pod \"cinder-e410-account-create-update-sg8cc\" (UID: \"076c482a-90e9-4db9-aa66-85e7d6a1ad3b\") " pod="openstack/cinder-e410-account-create-update-sg8cc" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.276517 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-b6qwz"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.281985 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e410-account-create-update-sg8cc" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.336230 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69fhz\" (UniqueName: \"kubernetes.io/projected/5c2e56ea-b70a-4562-87ae-9811198d1c96-kube-api-access-69fhz\") pod \"neutron-db-create-hvq49\" (UID: \"5c2e56ea-b70a-4562-87ae-9811198d1c96\") " pod="openstack/neutron-db-create-hvq49" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.336354 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-operator-scripts\") pod \"barbican-db-create-9mgl2\" (UID: \"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d\") " pod="openstack/barbican-db-create-9mgl2" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.336395 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58hcn\" (UniqueName: \"kubernetes.io/projected/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-kube-api-access-58hcn\") pod \"barbican-db-create-9mgl2\" (UID: \"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d\") " pod="openstack/barbican-db-create-9mgl2" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.336435 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2e56ea-b70a-4562-87ae-9811198d1c96-operator-scripts\") pod \"neutron-db-create-hvq49\" (UID: \"5c2e56ea-b70a-4562-87ae-9811198d1c96\") " pod="openstack/neutron-db-create-hvq49" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.337184 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-operator-scripts\") pod \"barbican-db-create-9mgl2\" (UID: 
\"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d\") " pod="openstack/barbican-db-create-9mgl2" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.357728 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58hcn\" (UniqueName: \"kubernetes.io/projected/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-kube-api-access-58hcn\") pod \"barbican-db-create-9mgl2\" (UID: \"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d\") " pod="openstack/barbican-db-create-9mgl2" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.407715 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9mgl2" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.438184 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2e56ea-b70a-4562-87ae-9811198d1c96-operator-scripts\") pod \"neutron-db-create-hvq49\" (UID: \"5c2e56ea-b70a-4562-87ae-9811198d1c96\") " pod="openstack/neutron-db-create-hvq49" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.438234 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69fhz\" (UniqueName: \"kubernetes.io/projected/5c2e56ea-b70a-4562-87ae-9811198d1c96-kube-api-access-69fhz\") pod \"neutron-db-create-hvq49\" (UID: \"5c2e56ea-b70a-4562-87ae-9811198d1c96\") " pod="openstack/neutron-db-create-hvq49" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.439201 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2e56ea-b70a-4562-87ae-9811198d1c96-operator-scripts\") pod \"neutron-db-create-hvq49\" (UID: \"5c2e56ea-b70a-4562-87ae-9811198d1c96\") " pod="openstack/neutron-db-create-hvq49" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.444081 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-ccz6s"] Jan 21 10:56:38 crc kubenswrapper[4745]: 
I0121 10:56:38.445126 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.464827 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ccz6s"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.479163 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.480400 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cd91-account-create-update-jlzv9"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.484688 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.485265 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2rgkp" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.485493 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.486509 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cd91-account-create-update-jlzv9" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.490574 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69fhz\" (UniqueName: \"kubernetes.io/projected/5c2e56ea-b70a-4562-87ae-9811198d1c96-kube-api-access-69fhz\") pod \"neutron-db-create-hvq49\" (UID: \"5c2e56ea-b70a-4562-87ae-9811198d1c96\") " pod="openstack/neutron-db-create-hvq49" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.501915 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.532605 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cd91-account-create-update-jlzv9"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.545798 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnkj9\" (UniqueName: \"kubernetes.io/projected/e95ab45a-aa5c-48af-8e3d-1a8900427471-kube-api-access-hnkj9\") pod \"heat-cd91-account-create-update-jlzv9\" (UID: \"e95ab45a-aa5c-48af-8e3d-1a8900427471\") " pod="openstack/heat-cd91-account-create-update-jlzv9" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.545847 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95ab45a-aa5c-48af-8e3d-1a8900427471-operator-scripts\") pod \"heat-cd91-account-create-update-jlzv9\" (UID: \"e95ab45a-aa5c-48af-8e3d-1a8900427471\") " pod="openstack/heat-cd91-account-create-update-jlzv9" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.545906 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-config-data\") pod \"keystone-db-sync-ccz6s\" (UID: 
\"319bfda0-51fb-4790-95eb-f1eed417deff\") " pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.545961 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x6dz\" (UniqueName: \"kubernetes.io/projected/319bfda0-51fb-4790-95eb-f1eed417deff-kube-api-access-4x6dz\") pod \"keystone-db-sync-ccz6s\" (UID: \"319bfda0-51fb-4790-95eb-f1eed417deff\") " pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.545998 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-combined-ca-bundle\") pod \"keystone-db-sync-ccz6s\" (UID: \"319bfda0-51fb-4790-95eb-f1eed417deff\") " pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.556626 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-hvq49" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.648638 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4x6dz\" (UniqueName: \"kubernetes.io/projected/319bfda0-51fb-4790-95eb-f1eed417deff-kube-api-access-4x6dz\") pod \"keystone-db-sync-ccz6s\" (UID: \"319bfda0-51fb-4790-95eb-f1eed417deff\") " pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.648915 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-combined-ca-bundle\") pod \"keystone-db-sync-ccz6s\" (UID: \"319bfda0-51fb-4790-95eb-f1eed417deff\") " pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.648954 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnkj9\" (UniqueName: \"kubernetes.io/projected/e95ab45a-aa5c-48af-8e3d-1a8900427471-kube-api-access-hnkj9\") pod \"heat-cd91-account-create-update-jlzv9\" (UID: \"e95ab45a-aa5c-48af-8e3d-1a8900427471\") " pod="openstack/heat-cd91-account-create-update-jlzv9" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.648977 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95ab45a-aa5c-48af-8e3d-1a8900427471-operator-scripts\") pod \"heat-cd91-account-create-update-jlzv9\" (UID: \"e95ab45a-aa5c-48af-8e3d-1a8900427471\") " pod="openstack/heat-cd91-account-create-update-jlzv9" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.649024 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-config-data\") pod \"keystone-db-sync-ccz6s\" (UID: \"319bfda0-51fb-4790-95eb-f1eed417deff\") " 
pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.667446 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95ab45a-aa5c-48af-8e3d-1a8900427471-operator-scripts\") pod \"heat-cd91-account-create-update-jlzv9\" (UID: \"e95ab45a-aa5c-48af-8e3d-1a8900427471\") " pod="openstack/heat-cd91-account-create-update-jlzv9" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.703869 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-combined-ca-bundle\") pod \"keystone-db-sync-ccz6s\" (UID: \"319bfda0-51fb-4790-95eb-f1eed417deff\") " pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.708635 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4x6dz\" (UniqueName: \"kubernetes.io/projected/319bfda0-51fb-4790-95eb-f1eed417deff-kube-api-access-4x6dz\") pod \"keystone-db-sync-ccz6s\" (UID: \"319bfda0-51fb-4790-95eb-f1eed417deff\") " pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.713138 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-config-data\") pod \"keystone-db-sync-ccz6s\" (UID: \"319bfda0-51fb-4790-95eb-f1eed417deff\") " pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.719669 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6chvk"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.728264 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-278d-account-create-update-2rxsx"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.729350 4745 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/neutron-278d-account-create-update-2rxsx" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.731941 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.748714 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-278d-account-create-update-2rxsx"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.753217 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-sm8n4"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.754927 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnkj9\" (UniqueName: \"kubernetes.io/projected/e95ab45a-aa5c-48af-8e3d-1a8900427471-kube-api-access-hnkj9\") pod \"heat-cd91-account-create-update-jlzv9\" (UID: \"e95ab45a-aa5c-48af-8e3d-1a8900427471\") " pod="openstack/heat-cd91-account-create-update-jlzv9" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.755293 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="c2b5df3e-a44d-42ff-96a4-2bfd32db45bf" containerName="galera" probeResult="failure" output="command timed out" Jan 21 10:56:38 crc kubenswrapper[4745]: W0121 10:56:38.769759 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1fc5fc9c_917c_42bb_b3b1_ca81cd63e6ae.slice/crio-bae4c47ac04d43360526d08b66f3ddb0f282e16ff11cd39133fed4dfcd68a250 WatchSource:0}: Error finding container bae4c47ac04d43360526d08b66f3ddb0f282e16ff11cd39133fed4dfcd68a250: Status 404 returned error can't find the container with id bae4c47ac04d43360526d08b66f3ddb0f282e16ff11cd39133fed4dfcd68a250 Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.816849 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.855914 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9w89\" (UniqueName: \"kubernetes.io/projected/edaf1847-278a-4826-a868-c5923e1ea872-kube-api-access-s9w89\") pod \"neutron-278d-account-create-update-2rxsx\" (UID: \"edaf1847-278a-4826-a868-c5923e1ea872\") " pod="openstack/neutron-278d-account-create-update-2rxsx" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.856464 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaf1847-278a-4826-a868-c5923e1ea872-operator-scripts\") pod \"neutron-278d-account-create-update-2rxsx\" (UID: \"edaf1847-278a-4826-a868-c5923e1ea872\") " pod="openstack/neutron-278d-account-create-update-2rxsx" Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.883939 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" event={"ID":"1113c34b-a9b5-4849-b1d8-b46b4e622841","Type":"ContainerStarted","Data":"8bf462d9c9f0ad5068f21ac6a6bad0ff1f620a553e34bcc38177ac79b74366bb"} Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.893251 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sm8n4" event={"ID":"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae","Type":"ContainerStarted","Data":"bae4c47ac04d43360526d08b66f3ddb0f282e16ff11cd39133fed4dfcd68a250"} Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.908905 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2071-account-create-update-c226s"] Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.909854 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2071-account-create-update-c226s"
Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.914376 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.931346 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-b6qwz" event={"ID":"269d4758-9e42-46a9-9e75-b2fee912d2fd","Type":"ContainerStarted","Data":"4d51c5d802b5b3cc0c87e59f87a2ddeae6d6140727eb96bdb0144cbbd7daf615"}
Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.931644 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cd91-account-create-update-jlzv9"
Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.950354 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2071-account-create-update-c226s"]
Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.971562 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/773f1b49-1207-44fe-ba15-ee0186030684-operator-scripts\") pod \"barbican-2071-account-create-update-c226s\" (UID: \"773f1b49-1207-44fe-ba15-ee0186030684\") " pod="openstack/barbican-2071-account-create-update-c226s"
Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.971642 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9w89\" (UniqueName: \"kubernetes.io/projected/edaf1847-278a-4826-a868-c5923e1ea872-kube-api-access-s9w89\") pod \"neutron-278d-account-create-update-2rxsx\" (UID: \"edaf1847-278a-4826-a868-c5923e1ea872\") " pod="openstack/neutron-278d-account-create-update-2rxsx"
Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.971683 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdc5d\" (UniqueName: \"kubernetes.io/projected/773f1b49-1207-44fe-ba15-ee0186030684-kube-api-access-mdc5d\") pod \"barbican-2071-account-create-update-c226s\" (UID: \"773f1b49-1207-44fe-ba15-ee0186030684\") " pod="openstack/barbican-2071-account-create-update-c226s"
Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.971727 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaf1847-278a-4826-a868-c5923e1ea872-operator-scripts\") pod \"neutron-278d-account-create-update-2rxsx\" (UID: \"edaf1847-278a-4826-a868-c5923e1ea872\") " pod="openstack/neutron-278d-account-create-update-2rxsx"
Jan 21 10:56:38 crc kubenswrapper[4745]: I0121 10:56:38.974726 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaf1847-278a-4826-a868-c5923e1ea872-operator-scripts\") pod \"neutron-278d-account-create-update-2rxsx\" (UID: \"edaf1847-278a-4826-a868-c5923e1ea872\") " pod="openstack/neutron-278d-account-create-update-2rxsx"
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.006850 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9w89\" (UniqueName: \"kubernetes.io/projected/edaf1847-278a-4826-a868-c5923e1ea872-kube-api-access-s9w89\") pod \"neutron-278d-account-create-update-2rxsx\" (UID: \"edaf1847-278a-4826-a868-c5923e1ea872\") " pod="openstack/neutron-278d-account-create-update-2rxsx"
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.068643 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-e410-account-create-update-sg8cc"]
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.078063 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/773f1b49-1207-44fe-ba15-ee0186030684-operator-scripts\") pod \"barbican-2071-account-create-update-c226s\" (UID: \"773f1b49-1207-44fe-ba15-ee0186030684\") " pod="openstack/barbican-2071-account-create-update-c226s"
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.078178 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdc5d\" (UniqueName: \"kubernetes.io/projected/773f1b49-1207-44fe-ba15-ee0186030684-kube-api-access-mdc5d\") pod \"barbican-2071-account-create-update-c226s\" (UID: \"773f1b49-1207-44fe-ba15-ee0186030684\") " pod="openstack/barbican-2071-account-create-update-c226s"
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.080011 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/773f1b49-1207-44fe-ba15-ee0186030684-operator-scripts\") pod \"barbican-2071-account-create-update-c226s\" (UID: \"773f1b49-1207-44fe-ba15-ee0186030684\") " pod="openstack/barbican-2071-account-create-update-c226s"
Jan 21 10:56:39 crc kubenswrapper[4745]: W0121 10:56:39.137708 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod076c482a_90e9_4db9_aa66_85e7d6a1ad3b.slice/crio-33292c4c416e8dd50fc1f3f8c98fc6be657f215d073a5a29375237492547a320 WatchSource:0}: Error finding container 33292c4c416e8dd50fc1f3f8c98fc6be657f215d073a5a29375237492547a320: Status 404 returned error can't find the container with id 33292c4c416e8dd50fc1f3f8c98fc6be657f215d073a5a29375237492547a320
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.185851 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdc5d\" (UniqueName: \"kubernetes.io/projected/773f1b49-1207-44fe-ba15-ee0186030684-kube-api-access-mdc5d\") pod \"barbican-2071-account-create-update-c226s\" (UID: \"773f1b49-1207-44fe-ba15-ee0186030684\") " pod="openstack/barbican-2071-account-create-update-c226s"
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.209550 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-278d-account-create-update-2rxsx"
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.245132 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2071-account-create-update-c226s"
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.378427 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-9mgl2"]
Jan 21 10:56:39 crc kubenswrapper[4745]: W0121 10:56:39.381006 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09f9ff49_53aa_4ecb_8e5c_b4fd1c13c78d.slice/crio-81eb559bba9586287e4caeb707649ec9317dfdd239f984081d30d78b0962ee30 WatchSource:0}: Error finding container 81eb559bba9586287e4caeb707649ec9317dfdd239f984081d30d78b0962ee30: Status 404 returned error can't find the container with id 81eb559bba9586287e4caeb707649ec9317dfdd239f984081d30d78b0962ee30
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.653789 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-hvq49"]
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.929638 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-ccz6s"]
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.943668 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cd91-account-create-update-jlzv9"]
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.979783 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-b6qwz" event={"ID":"269d4758-9e42-46a9-9e75-b2fee912d2fd","Type":"ContainerStarted","Data":"1149680219c7a9e31f0d009865027fb51efb9d7992c985160ecc8b071b8fc5e6"}
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.991905 4745 generic.go:334] "Generic (PLEG): container finished" podID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerID="8f4376d0fb335ce955811966592aac2162de65de2eb5b47d2bd8d7baeeef058d" exitCode=0
Jan 21 10:56:39 crc kubenswrapper[4745]: I0121 10:56:39.991975 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" event={"ID":"1113c34b-a9b5-4849-b1d8-b46b4e622841","Type":"ContainerDied","Data":"8f4376d0fb335ce955811966592aac2162de65de2eb5b47d2bd8d7baeeef058d"}
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.120299 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-b6qwz" podStartSLOduration=3.120276523 podStartE2EDuration="3.120276523s" podCreationTimestamp="2026-01-21 10:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:56:40.025664285 +0000 UTC m=+1184.486451883" watchObservedRunningTime="2026-01-21 10:56:40.120276523 +0000 UTC m=+1184.581064121"
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.154109 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hvq49" event={"ID":"5c2e56ea-b70a-4562-87ae-9811198d1c96","Type":"ContainerStarted","Data":"742ccd887c97e56cf47f567c9b555f4219932a124d154d1f480cff65f89f20f4"}
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.154522 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e410-account-create-update-sg8cc" event={"ID":"076c482a-90e9-4db9-aa66-85e7d6a1ad3b","Type":"ContainerStarted","Data":"17d0dbc23e1967da164f116764ef5cf86553358448a1853862df54ca7a33e7ae"}
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.158805 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e410-account-create-update-sg8cc" event={"ID":"076c482a-90e9-4db9-aa66-85e7d6a1ad3b","Type":"ContainerStarted","Data":"33292c4c416e8dd50fc1f3f8c98fc6be657f215d073a5a29375237492547a320"}
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.158880 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sm8n4" event={"ID":"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae","Type":"ContainerStarted","Data":"b9bc99531dd008a5d455342d10983c39fc8446c55dc19e73dafed8b83d8f9b75"}
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.158942 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9mgl2" event={"ID":"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d","Type":"ContainerStarted","Data":"94aa73677b12dba86da7e8cd092f041cf7411430e64c50535b092071d637c803"}
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.159000 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9mgl2" event={"ID":"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d","Type":"ContainerStarted","Data":"81eb559bba9586287e4caeb707649ec9317dfdd239f984081d30d78b0962ee30"}
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.169226 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-e410-account-create-update-sg8cc" podStartSLOduration=3.16920524 podStartE2EDuration="3.16920524s" podCreationTimestamp="2026-01-21 10:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:56:40.112329752 +0000 UTC m=+1184.573117350" watchObservedRunningTime="2026-01-21 10:56:40.16920524 +0000 UTC m=+1184.629992838"
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.176964 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-sm8n4" podStartSLOduration=3.176945997 podStartE2EDuration="3.176945997s" podCreationTimestamp="2026-01-21 10:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:56:40.14134234 +0000 UTC m=+1184.602129948" watchObservedRunningTime="2026-01-21 10:56:40.176945997 +0000 UTC m=+1184.637733595"
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.213898 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-9mgl2" podStartSLOduration=2.213877938 podStartE2EDuration="2.213877938s" podCreationTimestamp="2026-01-21 10:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:56:40.174242678 +0000 UTC m=+1184.635030286" watchObservedRunningTime="2026-01-21 10:56:40.213877938 +0000 UTC m=+1184.674665536"
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.251761 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2071-account-create-update-c226s"]
Jan 21 10:56:40 crc kubenswrapper[4745]: I0121 10:56:40.283186 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-278d-account-create-update-2rxsx"]
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.163940 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-278d-account-create-update-2rxsx" event={"ID":"edaf1847-278a-4826-a868-c5923e1ea872","Type":"ContainerStarted","Data":"5852c813b080230eaa54a31092b04998ea419d60cc3d066cc76cc02de66ef5ec"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.163982 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-278d-account-create-update-2rxsx" event={"ID":"edaf1847-278a-4826-a868-c5923e1ea872","Type":"ContainerStarted","Data":"03b97aad4d86e1d61cd5bb06fc3b344bf32b8762cee3820f5674a229558bb08e"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.166881 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hvq49" event={"ID":"5c2e56ea-b70a-4562-87ae-9811198d1c96","Type":"ContainerStarted","Data":"27b72fc04f017bc615dd59a7ac7c06ef300a814a91330b019a6288e7ca6c3a27"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.174934 4745 generic.go:334] "Generic (PLEG): container finished" podID="1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae" containerID="b9bc99531dd008a5d455342d10983c39fc8446c55dc19e73dafed8b83d8f9b75" exitCode=0
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.175008 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sm8n4" event={"ID":"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae","Type":"ContainerDied","Data":"b9bc99531dd008a5d455342d10983c39fc8446c55dc19e73dafed8b83d8f9b75"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.176633 4745 generic.go:334] "Generic (PLEG): container finished" podID="09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d" containerID="94aa73677b12dba86da7e8cd092f041cf7411430e64c50535b092071d637c803" exitCode=0
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.176666 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9mgl2" event={"ID":"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d","Type":"ContainerDied","Data":"94aa73677b12dba86da7e8cd092f041cf7411430e64c50535b092071d637c803"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.180770 4745 generic.go:334] "Generic (PLEG): container finished" podID="269d4758-9e42-46a9-9e75-b2fee912d2fd" containerID="1149680219c7a9e31f0d009865027fb51efb9d7992c985160ecc8b071b8fc5e6" exitCode=0
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.180846 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-b6qwz" event={"ID":"269d4758-9e42-46a9-9e75-b2fee912d2fd","Type":"ContainerDied","Data":"1149680219c7a9e31f0d009865027fb51efb9d7992c985160ecc8b071b8fc5e6"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.185723 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ccz6s" event={"ID":"319bfda0-51fb-4790-95eb-f1eed417deff","Type":"ContainerStarted","Data":"fb7ba9761132ce3b59773070677050b7cd065bd7521b490e6e17e12784781dcc"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.197841 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-278d-account-create-update-2rxsx" podStartSLOduration=3.197825455 podStartE2EDuration="3.197825455s" podCreationTimestamp="2026-01-21 10:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:56:41.195777572 +0000 UTC m=+1185.656565170" watchObservedRunningTime="2026-01-21 10:56:41.197825455 +0000 UTC m=+1185.658613053"
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.204311 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" event={"ID":"1113c34b-a9b5-4849-b1d8-b46b4e622841","Type":"ContainerStarted","Data":"aa6e2bb609344a3ea8c7cca200f3e1920233bef634dc5d5ab4f406f0bfd6ba4d"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.205101 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-6chvk"
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.208481 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cd91-account-create-update-jlzv9" event={"ID":"e95ab45a-aa5c-48af-8e3d-1a8900427471","Type":"ContainerStarted","Data":"2b30ebf1f7a5f0cffab5b6c88ee980eef2ea8aa204c8f06b9a0cb911dce72d20"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.208516 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cd91-account-create-update-jlzv9" event={"ID":"e95ab45a-aa5c-48af-8e3d-1a8900427471","Type":"ContainerStarted","Data":"3123d06db981d2921a844de09a26e1b23590f53150bbba767330063688ffc34c"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.210664 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2071-account-create-update-c226s" event={"ID":"773f1b49-1207-44fe-ba15-ee0186030684","Type":"ContainerStarted","Data":"ff248a821e88d231f300bf25a3b8b77c3bead3d1458cbf6acc5c8dc443f44046"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.210698 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2071-account-create-update-c226s" event={"ID":"773f1b49-1207-44fe-ba15-ee0186030684","Type":"ContainerStarted","Data":"2f86c6b1db12f8d09073748219d9a8a0675e13895cda78baf707960d448f6e17"}
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.384124 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-hvq49" podStartSLOduration=3.384105959 podStartE2EDuration="3.384105959s" podCreationTimestamp="2026-01-21 10:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:56:41.344234283 +0000 UTC m=+1185.805021881" watchObservedRunningTime="2026-01-21 10:56:41.384105959 +0000 UTC m=+1185.844893557"
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.427446 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-2071-account-create-update-c226s" podStartSLOduration=3.427424382 podStartE2EDuration="3.427424382s" podCreationTimestamp="2026-01-21 10:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:56:41.417336525 +0000 UTC m=+1185.878124123" watchObservedRunningTime="2026-01-21 10:56:41.427424382 +0000 UTC m=+1185.888211980"
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.450408 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" podStartSLOduration=4.450388936 podStartE2EDuration="4.450388936s" podCreationTimestamp="2026-01-21 10:56:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:56:41.44111193 +0000 UTC m=+1185.901899528" watchObservedRunningTime="2026-01-21 10:56:41.450388936 +0000 UTC m=+1185.911176524"
Jan 21 10:56:41 crc kubenswrapper[4745]: I0121 10:56:41.468046 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cd91-account-create-update-jlzv9" podStartSLOduration=3.468023776 podStartE2EDuration="3.468023776s" podCreationTimestamp="2026-01-21 10:56:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:56:41.45716912 +0000 UTC m=+1185.917956728" watchObservedRunningTime="2026-01-21 10:56:41.468023776 +0000 UTC m=+1185.928811374"
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.219446 4745 generic.go:334] "Generic (PLEG): container finished" podID="5c2e56ea-b70a-4562-87ae-9811198d1c96" containerID="27b72fc04f017bc615dd59a7ac7c06ef300a814a91330b019a6288e7ca6c3a27" exitCode=0
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.219573 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hvq49" event={"ID":"5c2e56ea-b70a-4562-87ae-9811198d1c96","Type":"ContainerDied","Data":"27b72fc04f017bc615dd59a7ac7c06ef300a814a91330b019a6288e7ca6c3a27"}
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.225397 4745 generic.go:334] "Generic (PLEG): container finished" podID="076c482a-90e9-4db9-aa66-85e7d6a1ad3b" containerID="17d0dbc23e1967da164f116764ef5cf86553358448a1853862df54ca7a33e7ae" exitCode=0
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.225628 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e410-account-create-update-sg8cc" event={"ID":"076c482a-90e9-4db9-aa66-85e7d6a1ad3b","Type":"ContainerDied","Data":"17d0dbc23e1967da164f116764ef5cf86553358448a1853862df54ca7a33e7ae"}
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.588691 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-b6qwz"
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.707486 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gslnc\" (UniqueName: \"kubernetes.io/projected/269d4758-9e42-46a9-9e75-b2fee912d2fd-kube-api-access-gslnc\") pod \"269d4758-9e42-46a9-9e75-b2fee912d2fd\" (UID: \"269d4758-9e42-46a9-9e75-b2fee912d2fd\") "
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.707647 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269d4758-9e42-46a9-9e75-b2fee912d2fd-operator-scripts\") pod \"269d4758-9e42-46a9-9e75-b2fee912d2fd\" (UID: \"269d4758-9e42-46a9-9e75-b2fee912d2fd\") "
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.709302 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/269d4758-9e42-46a9-9e75-b2fee912d2fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "269d4758-9e42-46a9-9e75-b2fee912d2fd" (UID: "269d4758-9e42-46a9-9e75-b2fee912d2fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.713851 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/269d4758-9e42-46a9-9e75-b2fee912d2fd-kube-api-access-gslnc" (OuterVolumeSpecName: "kube-api-access-gslnc") pod "269d4758-9e42-46a9-9e75-b2fee912d2fd" (UID: "269d4758-9e42-46a9-9e75-b2fee912d2fd"). InnerVolumeSpecName "kube-api-access-gslnc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.810828 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gslnc\" (UniqueName: \"kubernetes.io/projected/269d4758-9e42-46a9-9e75-b2fee912d2fd-kube-api-access-gslnc\") on node \"crc\" DevicePath \"\""
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.810892 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/269d4758-9e42-46a9-9e75-b2fee912d2fd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.887564 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sm8n4"
Jan 21 10:56:42 crc kubenswrapper[4745]: I0121 10:56:42.910454 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9mgl2"
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.015466 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56rzq\" (UniqueName: \"kubernetes.io/projected/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-kube-api-access-56rzq\") pod \"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae\" (UID: \"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae\") "
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.015718 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-operator-scripts\") pod \"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d\" (UID: \"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d\") "
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.015787 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58hcn\" (UniqueName: \"kubernetes.io/projected/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-kube-api-access-58hcn\") pod \"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d\" (UID: \"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d\") "
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.015858 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-operator-scripts\") pod \"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae\" (UID: \"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae\") "
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.016347 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d" (UID: "09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.017371 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae" (UID: "1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.017866 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.017917 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.020334 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-kube-api-access-56rzq" (OuterVolumeSpecName: "kube-api-access-56rzq") pod "1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae" (UID: "1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae"). InnerVolumeSpecName "kube-api-access-56rzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.021818 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-kube-api-access-58hcn" (OuterVolumeSpecName: "kube-api-access-58hcn") pod "09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d" (UID: "09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d"). InnerVolumeSpecName "kube-api-access-58hcn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.119852 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56rzq\" (UniqueName: \"kubernetes.io/projected/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae-kube-api-access-56rzq\") on node \"crc\" DevicePath \"\""
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.120578 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58hcn\" (UniqueName: \"kubernetes.io/projected/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d-kube-api-access-58hcn\") on node \"crc\" DevicePath \"\""
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.236126 4745 generic.go:334] "Generic (PLEG): container finished" podID="e95ab45a-aa5c-48af-8e3d-1a8900427471" containerID="2b30ebf1f7a5f0cffab5b6c88ee980eef2ea8aa204c8f06b9a0cb911dce72d20" exitCode=0
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.236194 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cd91-account-create-update-jlzv9" event={"ID":"e95ab45a-aa5c-48af-8e3d-1a8900427471","Type":"ContainerDied","Data":"2b30ebf1f7a5f0cffab5b6c88ee980eef2ea8aa204c8f06b9a0cb911dce72d20"}
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.238737 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-sm8n4" event={"ID":"1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae","Type":"ContainerDied","Data":"bae4c47ac04d43360526d08b66f3ddb0f282e16ff11cd39133fed4dfcd68a250"}
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.238763 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bae4c47ac04d43360526d08b66f3ddb0f282e16ff11cd39133fed4dfcd68a250"
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.238777 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-sm8n4"
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.240803 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-9mgl2" event={"ID":"09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d","Type":"ContainerDied","Data":"81eb559bba9586287e4caeb707649ec9317dfdd239f984081d30d78b0962ee30"}
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.240858 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81eb559bba9586287e4caeb707649ec9317dfdd239f984081d30d78b0962ee30"
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.240939 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-9mgl2"
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.244956 4745 generic.go:334] "Generic (PLEG): container finished" podID="773f1b49-1207-44fe-ba15-ee0186030684" containerID="ff248a821e88d231f300bf25a3b8b77c3bead3d1458cbf6acc5c8dc443f44046" exitCode=0
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.245098 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2071-account-create-update-c226s" event={"ID":"773f1b49-1207-44fe-ba15-ee0186030684","Type":"ContainerDied","Data":"ff248a821e88d231f300bf25a3b8b77c3bead3d1458cbf6acc5c8dc443f44046"}
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.246868 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-b6qwz" event={"ID":"269d4758-9e42-46a9-9e75-b2fee912d2fd","Type":"ContainerDied","Data":"4d51c5d802b5b3cc0c87e59f87a2ddeae6d6140727eb96bdb0144cbbd7daf615"}
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.246902 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d51c5d802b5b3cc0c87e59f87a2ddeae6d6140727eb96bdb0144cbbd7daf615"
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.247052 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-b6qwz"
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.248487 4745 generic.go:334] "Generic (PLEG): container finished" podID="edaf1847-278a-4826-a868-c5923e1ea872" containerID="5852c813b080230eaa54a31092b04998ea419d60cc3d066cc76cc02de66ef5ec" exitCode=0
Jan 21 10:56:43 crc kubenswrapper[4745]: I0121 10:56:43.248560 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-278d-account-create-update-2rxsx" event={"ID":"edaf1847-278a-4826-a868-c5923e1ea872","Type":"ContainerDied","Data":"5852c813b080230eaa54a31092b04998ea419d60cc3d066cc76cc02de66ef5ec"}
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.722380 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-e410-account-create-update-sg8cc"
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.732772 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2071-account-create-update-c226s"
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.743077 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-hvq49"
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.793692 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdc5d\" (UniqueName: \"kubernetes.io/projected/773f1b49-1207-44fe-ba15-ee0186030684-kube-api-access-mdc5d\") pod \"773f1b49-1207-44fe-ba15-ee0186030684\" (UID: \"773f1b49-1207-44fe-ba15-ee0186030684\") "
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.793760 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-operator-scripts\") pod \"076c482a-90e9-4db9-aa66-85e7d6a1ad3b\" (UID: \"076c482a-90e9-4db9-aa66-85e7d6a1ad3b\") "
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.793803 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69fhz\" (UniqueName: \"kubernetes.io/projected/5c2e56ea-b70a-4562-87ae-9811198d1c96-kube-api-access-69fhz\") pod \"5c2e56ea-b70a-4562-87ae-9811198d1c96\" (UID: \"5c2e56ea-b70a-4562-87ae-9811198d1c96\") "
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.793869 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g9m4\" (UniqueName: \"kubernetes.io/projected/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-kube-api-access-5g9m4\") pod \"076c482a-90e9-4db9-aa66-85e7d6a1ad3b\" (UID: \"076c482a-90e9-4db9-aa66-85e7d6a1ad3b\") "
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.793912 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2e56ea-b70a-4562-87ae-9811198d1c96-operator-scripts\") pod \"5c2e56ea-b70a-4562-87ae-9811198d1c96\" (UID: \"5c2e56ea-b70a-4562-87ae-9811198d1c96\") "
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.793950 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/773f1b49-1207-44fe-ba15-ee0186030684-operator-scripts\") pod \"773f1b49-1207-44fe-ba15-ee0186030684\" (UID: \"773f1b49-1207-44fe-ba15-ee0186030684\") "
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.795276 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773f1b49-1207-44fe-ba15-ee0186030684-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "773f1b49-1207-44fe-ba15-ee0186030684" (UID: "773f1b49-1207-44fe-ba15-ee0186030684"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.800569 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773f1b49-1207-44fe-ba15-ee0186030684-kube-api-access-mdc5d" (OuterVolumeSpecName: "kube-api-access-mdc5d") pod "773f1b49-1207-44fe-ba15-ee0186030684" (UID: "773f1b49-1207-44fe-ba15-ee0186030684"). InnerVolumeSpecName "kube-api-access-mdc5d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.802204 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c2e56ea-b70a-4562-87ae-9811198d1c96-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c2e56ea-b70a-4562-87ae-9811198d1c96" (UID: "5c2e56ea-b70a-4562-87ae-9811198d1c96"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.802368 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "076c482a-90e9-4db9-aa66-85e7d6a1ad3b" (UID: "076c482a-90e9-4db9-aa66-85e7d6a1ad3b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.803700 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c2e56ea-b70a-4562-87ae-9811198d1c96-kube-api-access-69fhz" (OuterVolumeSpecName: "kube-api-access-69fhz") pod "5c2e56ea-b70a-4562-87ae-9811198d1c96" (UID: "5c2e56ea-b70a-4562-87ae-9811198d1c96"). InnerVolumeSpecName "kube-api-access-69fhz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.804340 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-kube-api-access-5g9m4" (OuterVolumeSpecName: "kube-api-access-5g9m4") pod "076c482a-90e9-4db9-aa66-85e7d6a1ad3b" (UID: "076c482a-90e9-4db9-aa66-85e7d6a1ad3b"). InnerVolumeSpecName "kube-api-access-5g9m4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.814026 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-278d-account-create-update-2rxsx"
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.859845 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cd91-account-create-update-jlzv9"
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.895502 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9w89\" (UniqueName: \"kubernetes.io/projected/edaf1847-278a-4826-a868-c5923e1ea872-kube-api-access-s9w89\") pod \"edaf1847-278a-4826-a868-c5923e1ea872\" (UID: \"edaf1847-278a-4826-a868-c5923e1ea872\") "
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.895601 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95ab45a-aa5c-48af-8e3d-1a8900427471-operator-scripts\") pod \"e95ab45a-aa5c-48af-8e3d-1a8900427471\" (UID: \"e95ab45a-aa5c-48af-8e3d-1a8900427471\") "
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.895621 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnkj9\" (UniqueName: \"kubernetes.io/projected/e95ab45a-aa5c-48af-8e3d-1a8900427471-kube-api-access-hnkj9\") pod \"e95ab45a-aa5c-48af-8e3d-1a8900427471\" (UID: \"e95ab45a-aa5c-48af-8e3d-1a8900427471\") "
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.896661 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaf1847-278a-4826-a868-c5923e1ea872-operator-scripts\") pod \"edaf1847-278a-4826-a868-c5923e1ea872\" (UID: \"edaf1847-278a-4826-a868-c5923e1ea872\") "
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.897099 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.897118 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69fhz\" (UniqueName: 
\"kubernetes.io/projected/5c2e56ea-b70a-4562-87ae-9811198d1c96-kube-api-access-69fhz\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.897130 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g9m4\" (UniqueName: \"kubernetes.io/projected/076c482a-90e9-4db9-aa66-85e7d6a1ad3b-kube-api-access-5g9m4\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.897140 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c2e56ea-b70a-4562-87ae-9811198d1c96-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.897128 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edaf1847-278a-4826-a868-c5923e1ea872-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "edaf1847-278a-4826-a868-c5923e1ea872" (UID: "edaf1847-278a-4826-a868-c5923e1ea872"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.897149 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/773f1b49-1207-44fe-ba15-ee0186030684-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.897243 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdc5d\" (UniqueName: \"kubernetes.io/projected/773f1b49-1207-44fe-ba15-ee0186030684-kube-api-access-mdc5d\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.897328 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e95ab45a-aa5c-48af-8e3d-1a8900427471-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e95ab45a-aa5c-48af-8e3d-1a8900427471" (UID: "e95ab45a-aa5c-48af-8e3d-1a8900427471"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.899478 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e95ab45a-aa5c-48af-8e3d-1a8900427471-kube-api-access-hnkj9" (OuterVolumeSpecName: "kube-api-access-hnkj9") pod "e95ab45a-aa5c-48af-8e3d-1a8900427471" (UID: "e95ab45a-aa5c-48af-8e3d-1a8900427471"). InnerVolumeSpecName "kube-api-access-hnkj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.900128 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edaf1847-278a-4826-a868-c5923e1ea872-kube-api-access-s9w89" (OuterVolumeSpecName: "kube-api-access-s9w89") pod "edaf1847-278a-4826-a868-c5923e1ea872" (UID: "edaf1847-278a-4826-a868-c5923e1ea872"). InnerVolumeSpecName "kube-api-access-s9w89". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.998809 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e95ab45a-aa5c-48af-8e3d-1a8900427471-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.998852 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnkj9\" (UniqueName: \"kubernetes.io/projected/e95ab45a-aa5c-48af-8e3d-1a8900427471-kube-api-access-hnkj9\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.998862 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edaf1847-278a-4826-a868-c5923e1ea872-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:46 crc kubenswrapper[4745]: I0121 10:56:46.998872 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9w89\" (UniqueName: \"kubernetes.io/projected/edaf1847-278a-4826-a868-c5923e1ea872-kube-api-access-s9w89\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.284512 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2071-account-create-update-c226s" event={"ID":"773f1b49-1207-44fe-ba15-ee0186030684","Type":"ContainerDied","Data":"2f86c6b1db12f8d09073748219d9a8a0675e13895cda78baf707960d448f6e17"} Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.284579 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f86c6b1db12f8d09073748219d9a8a0675e13895cda78baf707960d448f6e17" Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.284550 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2071-account-create-update-c226s" Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.286011 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ccz6s" event={"ID":"319bfda0-51fb-4790-95eb-f1eed417deff","Type":"ContainerStarted","Data":"2dd9412639b60fb9a331be66f6006fe2b3cd7e5b5581fcaab636ab35b21078d6"} Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.288716 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-278d-account-create-update-2rxsx" event={"ID":"edaf1847-278a-4826-a868-c5923e1ea872","Type":"ContainerDied","Data":"03b97aad4d86e1d61cd5bb06fc3b344bf32b8762cee3820f5674a229558bb08e"} Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.288746 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03b97aad4d86e1d61cd5bb06fc3b344bf32b8762cee3820f5674a229558bb08e" Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.288813 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-278d-account-create-update-2rxsx" Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.297299 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cd91-account-create-update-jlzv9" event={"ID":"e95ab45a-aa5c-48af-8e3d-1a8900427471","Type":"ContainerDied","Data":"3123d06db981d2921a844de09a26e1b23590f53150bbba767330063688ffc34c"} Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.297331 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3123d06db981d2921a844de09a26e1b23590f53150bbba767330063688ffc34c" Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.297388 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cd91-account-create-update-jlzv9" Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.311927 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-hvq49" Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.311963 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-hvq49" event={"ID":"5c2e56ea-b70a-4562-87ae-9811198d1c96","Type":"ContainerDied","Data":"742ccd887c97e56cf47f567c9b555f4219932a124d154d1f480cff65f89f20f4"} Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.312022 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="742ccd887c97e56cf47f567c9b555f4219932a124d154d1f480cff65f89f20f4" Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.316845 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-e410-account-create-update-sg8cc" event={"ID":"076c482a-90e9-4db9-aa66-85e7d6a1ad3b","Type":"ContainerDied","Data":"33292c4c416e8dd50fc1f3f8c98fc6be657f215d073a5a29375237492547a320"} Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.316904 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33292c4c416e8dd50fc1f3f8c98fc6be657f215d073a5a29375237492547a320" Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.316986 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-e410-account-create-update-sg8cc" Jan 21 10:56:47 crc kubenswrapper[4745]: I0121 10:56:47.317933 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-ccz6s" podStartSLOduration=2.778799733 podStartE2EDuration="9.317906039s" podCreationTimestamp="2026-01-21 10:56:38 +0000 UTC" firstStartedPulling="2026-01-21 10:56:39.956946424 +0000 UTC m=+1184.417734022" lastFinishedPulling="2026-01-21 10:56:46.49605272 +0000 UTC m=+1190.956840328" observedRunningTime="2026-01-21 10:56:47.31202503 +0000 UTC m=+1191.772812628" watchObservedRunningTime="2026-01-21 10:56:47.317906039 +0000 UTC m=+1191.778693647" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.067612 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.131484 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nhwts"] Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.132074 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-nhwts" podUID="0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" containerName="dnsmasq-dns" containerID="cri-o://654ada3e56d569575b8e13e6e0679120caa50187c4c8de7bd0af946f3a175d26" gracePeriod=10 Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.343034 4745 generic.go:334] "Generic (PLEG): container finished" podID="0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" containerID="654ada3e56d569575b8e13e6e0679120caa50187c4c8de7bd0af946f3a175d26" exitCode=0 Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.343129 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nhwts" event={"ID":"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51","Type":"ContainerDied","Data":"654ada3e56d569575b8e13e6e0679120caa50187c4c8de7bd0af946f3a175d26"} Jan 21 10:56:48 crc 
kubenswrapper[4745]: I0121 10:56:48.558585 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.633324 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5477\" (UniqueName: \"kubernetes.io/projected/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-kube-api-access-x5477\") pod \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.633406 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-sb\") pod \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.633503 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-config\") pod \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.633565 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-dns-svc\") pod \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.633631 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-nb\") pod \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\" (UID: \"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51\") " Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.660799 4745 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-kube-api-access-x5477" (OuterVolumeSpecName: "kube-api-access-x5477") pod "0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" (UID: "0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51"). InnerVolumeSpecName "kube-api-access-x5477". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.710428 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" (UID: "0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.718085 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" (UID: "0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.736037 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5477\" (UniqueName: \"kubernetes.io/projected/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-kube-api-access-x5477\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.736074 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.736083 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.748458 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" (UID: "0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.757441 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-config" (OuterVolumeSpecName: "config") pod "0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" (UID: "0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.837687 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:48 crc kubenswrapper[4745]: I0121 10:56:48.837739 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:49 crc kubenswrapper[4745]: I0121 10:56:49.353157 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nhwts" event={"ID":"0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51","Type":"ContainerDied","Data":"36b47bc0a8eb5bb6bf0f68682fef86c4129f3eab61142b2405f84c9a7ea8e83f"} Jan 21 10:56:49 crc kubenswrapper[4745]: I0121 10:56:49.353585 4745 scope.go:117] "RemoveContainer" containerID="654ada3e56d569575b8e13e6e0679120caa50187c4c8de7bd0af946f3a175d26" Jan 21 10:56:49 crc kubenswrapper[4745]: I0121 10:56:49.353729 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nhwts" Jan 21 10:56:49 crc kubenswrapper[4745]: I0121 10:56:49.384663 4745 scope.go:117] "RemoveContainer" containerID="3d884056ea34c96cc5c316f140aa10193d87244478f4631ce7958fbb2871e895" Jan 21 10:56:49 crc kubenswrapper[4745]: I0121 10:56:49.407227 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nhwts"] Jan 21 10:56:49 crc kubenswrapper[4745]: I0121 10:56:49.417480 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nhwts"] Jan 21 10:56:50 crc kubenswrapper[4745]: I0121 10:56:50.016337 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" path="/var/lib/kubelet/pods/0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51/volumes" Jan 21 10:56:51 crc kubenswrapper[4745]: I0121 10:56:51.388172 4745 generic.go:334] "Generic (PLEG): container finished" podID="319bfda0-51fb-4790-95eb-f1eed417deff" containerID="2dd9412639b60fb9a331be66f6006fe2b3cd7e5b5581fcaab636ab35b21078d6" exitCode=0 Jan 21 10:56:51 crc kubenswrapper[4745]: I0121 10:56:51.388554 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-ccz6s" event={"ID":"319bfda0-51fb-4790-95eb-f1eed417deff","Type":"ContainerDied","Data":"2dd9412639b60fb9a331be66f6006fe2b3cd7e5b5581fcaab636ab35b21078d6"} Jan 21 10:56:52 crc kubenswrapper[4745]: I0121 10:56:52.743322 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:52 crc kubenswrapper[4745]: I0121 10:56:52.824763 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-config-data\") pod \"319bfda0-51fb-4790-95eb-f1eed417deff\" (UID: \"319bfda0-51fb-4790-95eb-f1eed417deff\") " Jan 21 10:56:52 crc kubenswrapper[4745]: I0121 10:56:52.824886 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4x6dz\" (UniqueName: \"kubernetes.io/projected/319bfda0-51fb-4790-95eb-f1eed417deff-kube-api-access-4x6dz\") pod \"319bfda0-51fb-4790-95eb-f1eed417deff\" (UID: \"319bfda0-51fb-4790-95eb-f1eed417deff\") " Jan 21 10:56:52 crc kubenswrapper[4745]: I0121 10:56:52.824939 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-combined-ca-bundle\") pod \"319bfda0-51fb-4790-95eb-f1eed417deff\" (UID: \"319bfda0-51fb-4790-95eb-f1eed417deff\") " Jan 21 10:56:52 crc kubenswrapper[4745]: I0121 10:56:52.840900 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/319bfda0-51fb-4790-95eb-f1eed417deff-kube-api-access-4x6dz" (OuterVolumeSpecName: "kube-api-access-4x6dz") pod "319bfda0-51fb-4790-95eb-f1eed417deff" (UID: "319bfda0-51fb-4790-95eb-f1eed417deff"). InnerVolumeSpecName "kube-api-access-4x6dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:56:52 crc kubenswrapper[4745]: I0121 10:56:52.861322 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "319bfda0-51fb-4790-95eb-f1eed417deff" (UID: "319bfda0-51fb-4790-95eb-f1eed417deff"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:56:52 crc kubenswrapper[4745]: I0121 10:56:52.874599 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-config-data" (OuterVolumeSpecName: "config-data") pod "319bfda0-51fb-4790-95eb-f1eed417deff" (UID: "319bfda0-51fb-4790-95eb-f1eed417deff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:56:52 crc kubenswrapper[4745]: I0121 10:56:52.926932 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4x6dz\" (UniqueName: \"kubernetes.io/projected/319bfda0-51fb-4790-95eb-f1eed417deff-kube-api-access-4x6dz\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:52 crc kubenswrapper[4745]: I0121 10:56:52.926976 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:52 crc kubenswrapper[4745]: I0121 10:56:52.926989 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/319bfda0-51fb-4790-95eb-f1eed417deff-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.407661 4745 generic.go:334] "Generic (PLEG): container finished" podID="619fc0d2-35d7-4927-b904-5bf122e76d24" containerID="b49bf2369716e44450e48493ed12bfa8b7e4216a4ceb1de2bdf1dd6a7dd11320" exitCode=0 Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.407730 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cr2xq" event={"ID":"619fc0d2-35d7-4927-b904-5bf122e76d24","Type":"ContainerDied","Data":"b49bf2369716e44450e48493ed12bfa8b7e4216a4ceb1de2bdf1dd6a7dd11320"} Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.409695 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-sync-ccz6s" event={"ID":"319bfda0-51fb-4790-95eb-f1eed417deff","Type":"ContainerDied","Data":"fb7ba9761132ce3b59773070677050b7cd065bd7521b490e6e17e12784781dcc"} Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.409728 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-ccz6s" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.409728 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb7ba9761132ce3b59773070677050b7cd065bd7521b490e6e17e12784781dcc" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.791505 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-kzpw7"] Jan 21 10:56:53 crc kubenswrapper[4745]: E0121 10:56:53.793431 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773f1b49-1207-44fe-ba15-ee0186030684" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.793554 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="773f1b49-1207-44fe-ba15-ee0186030684" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: E0121 10:56:53.793680 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae" containerName="mariadb-database-create" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.793758 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae" containerName="mariadb-database-create" Jan 21 10:56:53 crc kubenswrapper[4745]: E0121 10:56:53.793836 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d" containerName="mariadb-database-create" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.793896 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d" containerName="mariadb-database-create" 
Jan 21 10:56:53 crc kubenswrapper[4745]: E0121 10:56:53.793988 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" containerName="dnsmasq-dns" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.794063 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" containerName="dnsmasq-dns" Jan 21 10:56:53 crc kubenswrapper[4745]: E0121 10:56:53.794132 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95ab45a-aa5c-48af-8e3d-1a8900427471" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.794196 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95ab45a-aa5c-48af-8e3d-1a8900427471" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: E0121 10:56:53.794274 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c2e56ea-b70a-4562-87ae-9811198d1c96" containerName="mariadb-database-create" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.794333 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c2e56ea-b70a-4562-87ae-9811198d1c96" containerName="mariadb-database-create" Jan 21 10:56:53 crc kubenswrapper[4745]: E0121 10:56:53.794418 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edaf1847-278a-4826-a868-c5923e1ea872" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.794484 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="edaf1847-278a-4826-a868-c5923e1ea872" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: E0121 10:56:53.794569 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" containerName="init" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.794662 4745 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" containerName="init" Jan 21 10:56:53 crc kubenswrapper[4745]: E0121 10:56:53.794745 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="269d4758-9e42-46a9-9e75-b2fee912d2fd" containerName="mariadb-database-create" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.799937 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="269d4758-9e42-46a9-9e75-b2fee912d2fd" containerName="mariadb-database-create" Jan 21 10:56:53 crc kubenswrapper[4745]: E0121 10:56:53.799998 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="076c482a-90e9-4db9-aa66-85e7d6a1ad3b" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.800007 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="076c482a-90e9-4db9-aa66-85e7d6a1ad3b" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: E0121 10:56:53.800023 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="319bfda0-51fb-4790-95eb-f1eed417deff" containerName="keystone-db-sync" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.800030 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="319bfda0-51fb-4790-95eb-f1eed417deff" containerName="keystone-db-sync" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.823238 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="076c482a-90e9-4db9-aa66-85e7d6a1ad3b" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.823314 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="edaf1847-278a-4826-a868-c5923e1ea872" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.823330 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="269d4758-9e42-46a9-9e75-b2fee912d2fd" containerName="mariadb-database-create" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 
10:56:53.823353 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae" containerName="mariadb-database-create" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.823373 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="319bfda0-51fb-4790-95eb-f1eed417deff" containerName="keystone-db-sync" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.823382 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c2e56ea-b70a-4562-87ae-9811198d1c96" containerName="mariadb-database-create" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.823398 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95ab45a-aa5c-48af-8e3d-1a8900427471" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.823412 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cbf99fa-2e7c-4076-91fe-5bf2a4c28c51" containerName="dnsmasq-dns" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.823424 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="773f1b49-1207-44fe-ba15-ee0186030684" containerName="mariadb-account-create-update" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.823439 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d" containerName="mariadb-database-create" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.871272 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-chmcx"] Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.872482 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.886305 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.886498 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.886643 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2rgkp" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.886766 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.886865 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.892367 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kzpw7"] Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.892846 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.897166 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-chmcx"] Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.914844 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-rrpzk"] Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.916258 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-rrpzk" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.924068 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.924387 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-slsds" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.939704 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rrpzk"] Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951233 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-svc\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951309 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-config\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951357 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-config-data\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951418 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-sb\") 
pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951509 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-scripts\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951607 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdbtt\" (UniqueName: \"kubernetes.io/projected/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-kube-api-access-mdbtt\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951665 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-combined-ca-bundle\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951708 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdg9m\" (UniqueName: \"kubernetes.io/projected/7ac63b29-670c-44c6-bd89-828ee65aa0e0-kube-api-access-rdg9m\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951760 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951779 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-fernet-keys\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951856 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-credential-keys\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:53 crc kubenswrapper[4745]: I0121 10:56:53.951885 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.052938 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-svc\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.052984 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-config\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053020 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-config-data\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053047 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ddd2\" (UniqueName: \"kubernetes.io/projected/939e01d6-c378-485e-bd8c-8d394151ef3b-kube-api-access-6ddd2\") pod \"heat-db-sync-rrpzk\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " pod="openstack/heat-db-sync-rrpzk" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053070 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053095 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-combined-ca-bundle\") pod \"heat-db-sync-rrpzk\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " pod="openstack/heat-db-sync-rrpzk" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053135 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-scripts\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053182 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdbtt\" (UniqueName: \"kubernetes.io/projected/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-kube-api-access-mdbtt\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053201 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-combined-ca-bundle\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053221 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdg9m\" (UniqueName: \"kubernetes.io/projected/7ac63b29-670c-44c6-bd89-828ee65aa0e0-kube-api-access-rdg9m\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053280 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053296 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-fernet-keys\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053325 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-config-data\") pod \"heat-db-sync-rrpzk\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " pod="openstack/heat-db-sync-rrpzk" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053349 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-credential-keys\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.053366 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.054213 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.055719 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-svc\") pod 
\"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.056215 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-config\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.058116 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.060555 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.068308 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-scripts\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.074199 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-config-data\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc 
kubenswrapper[4745]: I0121 10:56:54.075374 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-89995c44c-6c5zt"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.077002 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.087979 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-combined-ca-bundle\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.103101 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.103293 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-5k74l" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.103402 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.103510 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.104969 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-fernet-keys\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.110500 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-credential-keys\") pod \"keystone-bootstrap-kzpw7\" 
(UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.140242 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdbtt\" (UniqueName: \"kubernetes.io/projected/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-kube-api-access-mdbtt\") pod \"dnsmasq-dns-5959f8865f-chmcx\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.141214 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdg9m\" (UniqueName: \"kubernetes.io/projected/7ac63b29-670c-44c6-bd89-828ee65aa0e0-kube-api-access-rdg9m\") pod \"keystone-bootstrap-kzpw7\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.143746 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-89995c44c-6c5zt"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.155818 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-combined-ca-bundle\") pod \"heat-db-sync-rrpzk\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " pod="openstack/heat-db-sync-rrpzk" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.155901 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-scripts\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.155917 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/de898638-1d3c-459e-8844-326d868c0852-logs\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.155945 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmtjg\" (UniqueName: \"kubernetes.io/projected/de898638-1d3c-459e-8844-326d868c0852-kube-api-access-dmtjg\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.155968 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de898638-1d3c-459e-8844-326d868c0852-horizon-secret-key\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.155993 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-config-data\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.156016 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-config-data\") pod \"heat-db-sync-rrpzk\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " pod="openstack/heat-db-sync-rrpzk" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.156078 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ddd2\" (UniqueName: 
\"kubernetes.io/projected/939e01d6-c378-485e-bd8c-8d394151ef3b-kube-api-access-6ddd2\") pod \"heat-db-sync-rrpzk\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " pod="openstack/heat-db-sync-rrpzk" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.165804 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-combined-ca-bundle\") pod \"heat-db-sync-rrpzk\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " pod="openstack/heat-db-sync-rrpzk" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.180353 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-config-data\") pod \"heat-db-sync-rrpzk\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " pod="openstack/heat-db-sync-rrpzk" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.209085 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.210209 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ddd2\" (UniqueName: \"kubernetes.io/projected/939e01d6-c378-485e-bd8c-8d394151ef3b-kube-api-access-6ddd2\") pod \"heat-db-sync-rrpzk\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " pod="openstack/heat-db-sync-rrpzk" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.221387 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-6x5s4"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.222623 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.254030 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-rrpzk" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.258448 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-scripts\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.258480 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de898638-1d3c-459e-8844-326d868c0852-logs\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.258513 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmtjg\" (UniqueName: \"kubernetes.io/projected/de898638-1d3c-459e-8844-326d868c0852-kube-api-access-dmtjg\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.258932 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de898638-1d3c-459e-8844-326d868c0852-horizon-secret-key\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.258969 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-config-data\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.259771 
4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de898638-1d3c-459e-8844-326d868c0852-logs\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.260316 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-scripts\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.260416 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-config-data\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.265499 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de898638-1d3c-459e-8844-326d868c0852-horizon-secret-key\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.268633 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.273170 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6x5s4"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.283554 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-km4vv" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.283935 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.284040 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.307717 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-tsql6"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.308836 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tsql6" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.339691 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rhfch" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.346730 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.361815 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-db-sync-config-data\") pod \"barbican-db-sync-tsql6\" (UID: \"267909cf-90b8-451d-9882-715e44dc2c30\") " pod="openstack/barbican-db-sync-tsql6" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.361897 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-combined-ca-bundle\") pod \"barbican-db-sync-tsql6\" (UID: \"267909cf-90b8-451d-9882-715e44dc2c30\") " pod="openstack/barbican-db-sync-tsql6" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.361924 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-config-data\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.361974 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ac43469-c72e-486a-80bf-f6de6bdfa199-etc-machine-id\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.361994 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-scripts\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.362428 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-combined-ca-bundle\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.362462 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvxcq\" (UniqueName: 
\"kubernetes.io/projected/267909cf-90b8-451d-9882-715e44dc2c30-kube-api-access-rvxcq\") pod \"barbican-db-sync-tsql6\" (UID: \"267909cf-90b8-451d-9882-715e44dc2c30\") " pod="openstack/barbican-db-sync-tsql6" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.362501 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnskw\" (UniqueName: \"kubernetes.io/projected/9ac43469-c72e-486a-80bf-f6de6bdfa199-kube-api-access-dnskw\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.362521 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-db-sync-config-data\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.375100 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmtjg\" (UniqueName: \"kubernetes.io/projected/de898638-1d3c-459e-8844-326d868c0852-kube-api-access-dmtjg\") pod \"horizon-89995c44c-6c5zt\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.391440 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tsql6"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.448224 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-57b567c785-rppfm"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.462116 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.463414 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnskw\" (UniqueName: \"kubernetes.io/projected/9ac43469-c72e-486a-80bf-f6de6bdfa199-kube-api-access-dnskw\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.463452 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-db-sync-config-data\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.463488 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-db-sync-config-data\") pod \"barbican-db-sync-tsql6\" (UID: \"267909cf-90b8-451d-9882-715e44dc2c30\") " pod="openstack/barbican-db-sync-tsql6" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.463517 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-combined-ca-bundle\") pod \"barbican-db-sync-tsql6\" (UID: \"267909cf-90b8-451d-9882-715e44dc2c30\") " pod="openstack/barbican-db-sync-tsql6" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.463552 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-config-data\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc 
kubenswrapper[4745]: I0121 10:56:54.463576 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-scripts\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.463593 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ac43469-c72e-486a-80bf-f6de6bdfa199-etc-machine-id\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.463634 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-combined-ca-bundle\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.463663 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvxcq\" (UniqueName: \"kubernetes.io/projected/267909cf-90b8-451d-9882-715e44dc2c30-kube-api-access-rvxcq\") pod \"barbican-db-sync-tsql6\" (UID: \"267909cf-90b8-451d-9882-715e44dc2c30\") " pod="openstack/barbican-db-sync-tsql6" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.472302 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ac43469-c72e-486a-80bf-f6de6bdfa199-etc-machine-id\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.481272 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-scripts\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.505872 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-config-data\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.506274 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-db-sync-config-data\") pod \"barbican-db-sync-tsql6\" (UID: \"267909cf-90b8-451d-9882-715e44dc2c30\") " pod="openstack/barbican-db-sync-tsql6" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.506293 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-combined-ca-bundle\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.513209 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-db-sync-config-data\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.523026 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-combined-ca-bundle\") pod \"barbican-db-sync-tsql6\" (UID: 
\"267909cf-90b8-451d-9882-715e44dc2c30\") " pod="openstack/barbican-db-sync-tsql6" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.524197 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnskw\" (UniqueName: \"kubernetes.io/projected/9ac43469-c72e-486a-80bf-f6de6bdfa199-kube-api-access-dnskw\") pod \"cinder-db-sync-6x5s4\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.555646 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvxcq\" (UniqueName: \"kubernetes.io/projected/267909cf-90b8-451d-9882-715e44dc2c30-kube-api-access-rvxcq\") pod \"barbican-db-sync-tsql6\" (UID: \"267909cf-90b8-451d-9882-715e44dc2c30\") " pod="openstack/barbican-db-sync-tsql6" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.567274 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-config-data\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.567337 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-scripts\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.567373 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr4rf\" (UniqueName: \"kubernetes.io/projected/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-kube-api-access-vr4rf\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " 
pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.567399 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-horizon-secret-key\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.567437 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-logs\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.588734 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-pgh4g"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.590261 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pgh4g" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.601060 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.601260 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mgwzl" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.606807 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.619277 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.638598 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57b567c785-rppfm"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.654450 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.670598 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-combined-ca-bundle\") pod \"neutron-db-sync-pgh4g\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") " pod="openstack/neutron-db-sync-pgh4g" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.670657 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-horizon-secret-key\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.670679 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-config\") pod \"neutron-db-sync-pgh4g\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") " pod="openstack/neutron-db-sync-pgh4g" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.670718 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts8s6\" (UniqueName: \"kubernetes.io/projected/006a9d44-bc1a-41ce-8103-591327ca1afa-kube-api-access-ts8s6\") pod \"neutron-db-sync-pgh4g\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") " pod="openstack/neutron-db-sync-pgh4g" Jan 21 10:56:54 crc 
kubenswrapper[4745]: I0121 10:56:54.670747 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-logs\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.670812 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-config-data\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.670848 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-scripts\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.670880 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr4rf\" (UniqueName: \"kubernetes.io/projected/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-kube-api-access-vr4rf\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.674513 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-horizon-secret-key\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.675462 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-config-data\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.675756 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-logs\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.676140 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-scripts\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.685441 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.687504 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.711885 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.711978 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-tsql6" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.718608 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pgh4g"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.727273 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.777616 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.777926 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-scripts\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.778085 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-config-data\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.778229 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-log-httpd\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.778355 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-combined-ca-bundle\") pod \"neutron-db-sync-pgh4g\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") " pod="openstack/neutron-db-sync-pgh4g" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.778443 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-config\") pod \"neutron-db-sync-pgh4g\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") " pod="openstack/neutron-db-sync-pgh4g" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.778567 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-run-httpd\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.778670 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5lc6\" (UniqueName: \"kubernetes.io/projected/a9221596-2fe8-46b3-b699-2360ddbe7dcf-kube-api-access-m5lc6\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.778772 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.778883 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts8s6\" (UniqueName: 
\"kubernetes.io/projected/006a9d44-bc1a-41ce-8103-591327ca1afa-kube-api-access-ts8s6\") pod \"neutron-db-sync-pgh4g\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") " pod="openstack/neutron-db-sync-pgh4g" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.782873 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.845739 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-config\") pod \"neutron-db-sync-pgh4g\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") " pod="openstack/neutron-db-sync-pgh4g" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.879743 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-combined-ca-bundle\") pod \"neutron-db-sync-pgh4g\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") " pod="openstack/neutron-db-sync-pgh4g" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.883581 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.883665 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-scripts\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.883723 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-config-data\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.883755 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-log-httpd\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.883809 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-run-httpd\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.883834 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5lc6\" (UniqueName: \"kubernetes.io/projected/a9221596-2fe8-46b3-b699-2360ddbe7dcf-kube-api-access-m5lc6\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.883867 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.892377 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr4rf\" (UniqueName: \"kubernetes.io/projected/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-kube-api-access-vr4rf\") pod \"horizon-57b567c785-rppfm\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " pod="openstack/horizon-57b567c785-rppfm" Jan 21 
10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.915133 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts8s6\" (UniqueName: \"kubernetes.io/projected/006a9d44-bc1a-41ce-8103-591327ca1afa-kube-api-access-ts8s6\") pod \"neutron-db-sync-pgh4g\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") " pod="openstack/neutron-db-sync-pgh4g" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.915705 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-run-httpd\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.915969 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-log-httpd\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.926788 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-scripts\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.971865 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-pgh4g" Jan 21 10:56:54 crc kubenswrapper[4745]: I0121 10:56:54.986442 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.066072 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.066972 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-config-data\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.082563 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5lc6\" (UniqueName: \"kubernetes.io/projected/a9221596-2fe8-46b3-b699-2360ddbe7dcf-kube-api-access-m5lc6\") pod \"ceilometer-0\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " pod="openstack/ceilometer-0" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.096452 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.126746 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.170313 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-chmcx"] Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.394821 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hj5fq"] Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.434968 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hj5fq"] Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.435111 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.440514 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.477934 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4wdzz" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.478159 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.500806 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-b2g9z"] Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.504159 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.513479 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-b2g9z"] Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.577177 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-scripts\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.577663 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-combined-ca-bundle\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.577744 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-config-data\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.577806 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0086c8-abfc-4740-9d81-62eab45e6507-logs\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.577871 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxjj9\" (UniqueName: 
\"kubernetes.io/projected/be0086c8-abfc-4740-9d81-62eab45e6507-kube-api-access-vxjj9\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.592857 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-kzpw7"] Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.681275 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-config-data\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.681382 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0086c8-abfc-4740-9d81-62eab45e6507-logs\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.681456 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.681497 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxjj9\" (UniqueName: \"kubernetes.io/projected/be0086c8-abfc-4740-9d81-62eab45e6507-kube-api-access-vxjj9\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.681614 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjgts\" (UniqueName: \"kubernetes.io/projected/879f577c-2f73-4b10-8754-ffaa7af8f361-kube-api-access-vjgts\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.681690 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.681745 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-config\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.681798 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.681870 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-scripts\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.681914 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-combined-ca-bundle\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.681986 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.682052 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0086c8-abfc-4740-9d81-62eab45e6507-logs\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.692650 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-combined-ca-bundle\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.700940 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-scripts\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.703629 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-config-data\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.729834 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxjj9\" (UniqueName: \"kubernetes.io/projected/be0086c8-abfc-4740-9d81-62eab45e6507-kube-api-access-vxjj9\") pod \"placement-db-sync-hj5fq\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.785094 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.785159 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-config\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.785192 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.785245 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-nb\") 
pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.785299 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.785323 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjgts\" (UniqueName: \"kubernetes.io/projected/879f577c-2f73-4b10-8754-ffaa7af8f361-kube-api-access-vjgts\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.786160 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.786657 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.786780 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-config\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") 
" pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.788039 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.793134 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.814124 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjgts\" (UniqueName: \"kubernetes.io/projected/879f577c-2f73-4b10-8754-ffaa7af8f361-kube-api-access-vjgts\") pod \"dnsmasq-dns-58dd9ff6bc-b2g9z\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.819760 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hj5fq" Jan 21 10:56:55 crc kubenswrapper[4745]: I0121 10:56:55.864069 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.182499 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.225365 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-db-sync-config-data\") pod \"619fc0d2-35d7-4927-b904-5bf122e76d24\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.225662 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-config-data\") pod \"619fc0d2-35d7-4927-b904-5bf122e76d24\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.225807 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-combined-ca-bundle\") pod \"619fc0d2-35d7-4927-b904-5bf122e76d24\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.226160 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xlzc\" (UniqueName: \"kubernetes.io/projected/619fc0d2-35d7-4927-b904-5bf122e76d24-kube-api-access-6xlzc\") pod \"619fc0d2-35d7-4927-b904-5bf122e76d24\" (UID: \"619fc0d2-35d7-4927-b904-5bf122e76d24\") " Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.249121 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "619fc0d2-35d7-4927-b904-5bf122e76d24" (UID: "619fc0d2-35d7-4927-b904-5bf122e76d24"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.253889 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/619fc0d2-35d7-4927-b904-5bf122e76d24-kube-api-access-6xlzc" (OuterVolumeSpecName: "kube-api-access-6xlzc") pod "619fc0d2-35d7-4927-b904-5bf122e76d24" (UID: "619fc0d2-35d7-4927-b904-5bf122e76d24"). InnerVolumeSpecName "kube-api-access-6xlzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.273563 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "619fc0d2-35d7-4927-b904-5bf122e76d24" (UID: "619fc0d2-35d7-4927-b904-5bf122e76d24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.347885 4745 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.347927 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.347939 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xlzc\" (UniqueName: \"kubernetes.io/projected/619fc0d2-35d7-4927-b904-5bf122e76d24-kube-api-access-6xlzc\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.372588 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-config-data" (OuterVolumeSpecName: "config-data") pod "619fc0d2-35d7-4927-b904-5bf122e76d24" (UID: "619fc0d2-35d7-4927-b904-5bf122e76d24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.456458 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/619fc0d2-35d7-4927-b904-5bf122e76d24-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.529860 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-89995c44c-6c5zt"] Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.547684 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-cr2xq" event={"ID":"619fc0d2-35d7-4927-b904-5bf122e76d24","Type":"ContainerDied","Data":"742e17788d05d837e654cdf618a9f628f22882669601def2000d4878660b0ba7"} Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.547733 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="742e17788d05d837e654cdf618a9f628f22882669601def2000d4878660b0ba7" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.547823 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-cr2xq" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.571572 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-rrpzk"] Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.571960 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89995c44c-6c5zt" event={"ID":"de898638-1d3c-459e-8844-326d868c0852","Type":"ContainerStarted","Data":"4c42499a1a53607c9cacd52d4fe0a045c7e75e15e200e56e68f3511d24324128"} Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.581700 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kzpw7" event={"ID":"7ac63b29-670c-44c6-bd89-828ee65aa0e0","Type":"ContainerStarted","Data":"6ef26153aa55b357d813b014c35776fa0255781b742cd0ac4cd65328bcc16dd0"} Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.581771 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kzpw7" event={"ID":"7ac63b29-670c-44c6-bd89-828ee65aa0e0","Type":"ContainerStarted","Data":"fc786a572a102776fc588fe7d989e85faa6b66b31bdf7a232810134894e46aca"} Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.607114 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-6x5s4"] Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.635616 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-pgh4g"] Jan 21 10:56:56 crc kubenswrapper[4745]: W0121 10:56:56.650351 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd73366f6_2bea_40d0_ae8b_8cb61cdd78aa.slice/crio-fa82e00c47dc3703cd13a07a72653a0111b0dd74404dd1551b323ef997e264a5 WatchSource:0}: Error finding container fa82e00c47dc3703cd13a07a72653a0111b0dd74404dd1551b323ef997e264a5: Status 404 returned error can't find the container with id 
fa82e00c47dc3703cd13a07a72653a0111b0dd74404dd1551b323ef997e264a5 Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.671672 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-chmcx"] Jan 21 10:56:56 crc kubenswrapper[4745]: W0121 10:56:56.675481 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod267909cf_90b8_451d_9882_715e44dc2c30.slice/crio-774e7041fd1a01f8d74ed89db197a74f049a476e3f81419d1205b6478c2b9dbf WatchSource:0}: Error finding container 774e7041fd1a01f8d74ed89db197a74f049a476e3f81419d1205b6478c2b9dbf: Status 404 returned error can't find the container with id 774e7041fd1a01f8d74ed89db197a74f049a476e3f81419d1205b6478c2b9dbf Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.687692 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-tsql6"] Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.690420 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-kzpw7" podStartSLOduration=3.6903961389999997 podStartE2EDuration="3.690396139s" podCreationTimestamp="2026-01-21 10:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:56:56.625290581 +0000 UTC m=+1201.086078179" watchObservedRunningTime="2026-01-21 10:56:56.690396139 +0000 UTC m=+1201.151183737" Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.705582 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-57b567c785-rppfm"] Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.709639 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:56:56 crc kubenswrapper[4745]: I0121 10:56:56.992790 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-b2g9z"] Jan 21 10:56:57 crc 
kubenswrapper[4745]: I0121 10:56:57.146946 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hj5fq"] Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.159017 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57b567c785-rppfm"] Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.239963 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5b7d8c9d7c-75nls"] Jan 21 10:56:57 crc kubenswrapper[4745]: E0121 10:56:57.240347 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="619fc0d2-35d7-4927-b904-5bf122e76d24" containerName="glance-db-sync" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.240362 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="619fc0d2-35d7-4927-b904-5bf122e76d24" containerName="glance-db-sync" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.240558 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="619fc0d2-35d7-4927-b904-5bf122e76d24" containerName="glance-db-sync" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.241550 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.310182 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b7d8c9d7c-75nls"] Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.321309 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-config-data\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.321389 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj76s\" (UniqueName: \"kubernetes.io/projected/d966de46-956a-482d-8960-4a41cbd53762-kube-api-access-jj76s\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.321433 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d966de46-956a-482d-8960-4a41cbd53762-horizon-secret-key\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.321455 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-scripts\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.321476 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/d966de46-956a-482d-8960-4a41cbd53762-logs\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.336744 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.429391 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-config-data\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.452658 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-config-data\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.462478 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj76s\" (UniqueName: \"kubernetes.io/projected/d966de46-956a-482d-8960-4a41cbd53762-kube-api-access-jj76s\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.463145 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d966de46-956a-482d-8960-4a41cbd53762-horizon-secret-key\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.464555 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-scripts\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.465827 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-scripts\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.466747 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d966de46-956a-482d-8960-4a41cbd53762-logs\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.470272 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d966de46-956a-482d-8960-4a41cbd53762-logs\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.502136 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d966de46-956a-482d-8960-4a41cbd53762-horizon-secret-key\") pod \"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.546646 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj76s\" (UniqueName: \"kubernetes.io/projected/d966de46-956a-482d-8960-4a41cbd53762-kube-api-access-jj76s\") pod 
\"horizon-5b7d8c9d7c-75nls\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.555800 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.745281 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pgh4g" event={"ID":"006a9d44-bc1a-41ce-8103-591327ca1afa","Type":"ContainerStarted","Data":"8d8a251e7b5d69a53b116b8e11b746251af689e7f608a5fb34c902fd447b3c3e"} Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.768732 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hj5fq" event={"ID":"be0086c8-abfc-4740-9d81-62eab45e6507","Type":"ContainerStarted","Data":"877dcdd850aeedc2b3d6eb12da7a1760363420b3e605b8ff7b3d33d07106c6ed"} Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.804331 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rrpzk" event={"ID":"939e01d6-c378-485e-bd8c-8d394151ef3b","Type":"ContainerStarted","Data":"ec2895213c20acf7ad2fd71deb9dfb145ccd5e99f77af42953bda7c2d615fb15"} Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.845947 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tsql6" event={"ID":"267909cf-90b8-451d-9882-715e44dc2c30","Type":"ContainerStarted","Data":"774e7041fd1a01f8d74ed89db197a74f049a476e3f81419d1205b6478c2b9dbf"} Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.876150 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-chmcx" event={"ID":"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa","Type":"ContainerStarted","Data":"fa82e00c47dc3703cd13a07a72653a0111b0dd74404dd1551b323ef997e264a5"} Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.943554 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-57b567c785-rppfm" event={"ID":"0fe64492-fe0a-4f98-b1c6-69a555f6d19f","Type":"ContainerStarted","Data":"0089ccca1cc172a72c3a4cd0e8e86532e9585cc920094c97b8c051ec05c8ca10"} Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.993477 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-b2g9z"] Jan 21 10:56:57 crc kubenswrapper[4745]: I0121 10:56:57.993837 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" event={"ID":"879f577c-2f73-4b10-8754-ffaa7af8f361","Type":"ContainerStarted","Data":"2c9f00e711aff85ca5bb006c994cc42b2518fd66e6fbdd6b1bb97a82aa69f720"} Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.090340 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6x5s4" event={"ID":"9ac43469-c72e-486a-80bf-f6de6bdfa199","Type":"ContainerStarted","Data":"2318de45fdbcf53dd26657dd68e7c2b50bcaf2fcc9754e4237f90ad4084d5f81"} Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.090393 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd5zm"] Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.099273 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9221596-2fe8-46b3-b699-2360ddbe7dcf","Type":"ContainerStarted","Data":"842dc5e4a3ef20996a3954f8915b120bd61ac8984bc15b18992cd2bc9e372c15"} Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.099336 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd5zm"] Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.099415 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.247159 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.247627 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.247728 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.247773 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh724\" (UniqueName: \"kubernetes.io/projected/b28e59ca-5792-4c1a-a96c-e6aee4f83026-kube-api-access-vh724\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.247831 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-nb\") pod 
\"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.247860 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-config\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.351111 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-config\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.351212 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.351244 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.351312 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: 
\"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.351360 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh724\" (UniqueName: \"kubernetes.io/projected/b28e59ca-5792-4c1a-a96c-e6aee4f83026-kube-api-access-vh724\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.351439 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.352835 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.353211 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.353294 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.353475 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.353886 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-config\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.379551 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh724\" (UniqueName: \"kubernetes.io/projected/b28e59ca-5792-4c1a-a96c-e6aee4f83026-kube-api-access-vh724\") pod \"dnsmasq-dns-785d8bcb8c-pd5zm\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.512178 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.818297 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.820247 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.822784 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.828722 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.829114 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6gq8x" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.838832 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.913848 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.948871 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5b7d8c9d7c-75nls"] Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.989109 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-nb\") pod \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.989409 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-config\") pod \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.990425 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-svc\") pod \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.990522 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-sb\") pod \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " Jan 21 10:56:58 crc kubenswrapper[4745]: I0121 10:56:58.990670 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-swift-storage-0\") pod \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:58.999945 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdbtt\" (UniqueName: \"kubernetes.io/projected/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-kube-api-access-mdbtt\") pod \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\" (UID: \"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa\") " Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.000635 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.001058 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-scripts\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") 
" pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.001114 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.001184 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6lbq\" (UniqueName: \"kubernetes.io/projected/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-kube-api-access-t6lbq\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.001302 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.001389 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-config-data\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.001545 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-logs\") pod \"glance-default-external-api-0\" (UID: 
\"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.018634 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-kube-api-access-mdbtt" (OuterVolumeSpecName: "kube-api-access-mdbtt") pod "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa" (UID: "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa"). InnerVolumeSpecName "kube-api-access-mdbtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.090981 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa" (UID: "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.093172 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa" (UID: "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.104817 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-scripts\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.104856 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.104877 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6lbq\" (UniqueName: \"kubernetes.io/projected/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-kube-api-access-t6lbq\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.104918 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.104952 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-config-data\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc 
kubenswrapper[4745]: I0121 10:56:59.105000 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-logs\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.105038 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.105106 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.105118 4745 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.105130 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdbtt\" (UniqueName: \"kubernetes.io/projected/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-kube-api-access-mdbtt\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.105522 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.106697 
4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.114590 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-logs\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.118094 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-config" (OuterVolumeSpecName: "config") pod "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa" (UID: "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.119656 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa" (UID: "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.145424 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-scripts\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.146351 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.147195 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-config-data\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.183735 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6lbq\" (UniqueName: \"kubernetes.io/projected/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-kube-api-access-t6lbq\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.196689 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa" (UID: "d73366f6-2bea-40d0-ae8b-8cb61cdd78aa"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.232253 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.232293 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.232312 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.257678 4745 generic.go:334] "Generic (PLEG): container finished" podID="879f577c-2f73-4b10-8754-ffaa7af8f361" containerID="d8579a7e92aed54f52ea4d65eb79e3dc0371ee11ad89b587661183f0884913e2" exitCode=0 Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.257796 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" event={"ID":"879f577c-2f73-4b10-8754-ffaa7af8f361","Type":"ContainerDied","Data":"d8579a7e92aed54f52ea4d65eb79e3dc0371ee11ad89b587661183f0884913e2"} Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.278690 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") " pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.293000 4745 generic.go:334] "Generic (PLEG): container finished" podID="d73366f6-2bea-40d0-ae8b-8cb61cdd78aa" 
containerID="5d5c41c937a029a7a7a6fad7aa37d372aa7ae0b6f6359917c55cb1cb810b2d66" exitCode=0 Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.293125 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-chmcx" event={"ID":"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa","Type":"ContainerDied","Data":"5d5c41c937a029a7a7a6fad7aa37d372aa7ae0b6f6359917c55cb1cb810b2d66"} Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.293152 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-chmcx" event={"ID":"d73366f6-2bea-40d0-ae8b-8cb61cdd78aa","Type":"ContainerDied","Data":"fa82e00c47dc3703cd13a07a72653a0111b0dd74404dd1551b323ef997e264a5"} Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.293170 4745 scope.go:117] "RemoveContainer" containerID="5d5c41c937a029a7a7a6fad7aa37d372aa7ae0b6f6359917c55cb1cb810b2d66" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.293293 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-chmcx" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.296939 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 10:56:59 crc kubenswrapper[4745]: E0121 10:56:59.297422 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d73366f6-2bea-40d0-ae8b-8cb61cdd78aa" containerName="init" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.297448 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d73366f6-2bea-40d0-ae8b-8cb61cdd78aa" containerName="init" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.297651 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d73366f6-2bea-40d0-ae8b-8cb61cdd78aa" containerName="init" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.302785 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.309212 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.311700 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b7d8c9d7c-75nls" event={"ID":"d966de46-956a-482d-8960-4a41cbd53762","Type":"ContainerStarted","Data":"6f86c344e067f202e3191bdd9e098e66e1035146d4ac329b5f32f80f75f5b12d"} Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.323142 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pgh4g" event={"ID":"006a9d44-bc1a-41ce-8103-591327ca1afa","Type":"ContainerStarted","Data":"bc7e126930deceee5930e454d0bbcce31f72426de62cde678d37ff82abe2e933"} Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.326964 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.449275 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.449358 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.449736 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-tjzlb\" (UniqueName: \"kubernetes.io/projected/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-kube-api-access-tjzlb\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.449763 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.449808 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.449894 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.450179 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.496228 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.503728 4745 scope.go:117] "RemoveContainer" containerID="5d5c41c937a029a7a7a6fad7aa37d372aa7ae0b6f6359917c55cb1cb810b2d66" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.503924 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-chmcx"] Jan 21 10:56:59 crc kubenswrapper[4745]: E0121 10:56:59.513161 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d5c41c937a029a7a7a6fad7aa37d372aa7ae0b6f6359917c55cb1cb810b2d66\": container with ID starting with 5d5c41c937a029a7a7a6fad7aa37d372aa7ae0b6f6359917c55cb1cb810b2d66 not found: ID does not exist" containerID="5d5c41c937a029a7a7a6fad7aa37d372aa7ae0b6f6359917c55cb1cb810b2d66" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.513208 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d5c41c937a029a7a7a6fad7aa37d372aa7ae0b6f6359917c55cb1cb810b2d66"} err="failed to get container status \"5d5c41c937a029a7a7a6fad7aa37d372aa7ae0b6f6359917c55cb1cb810b2d66\": rpc error: code = NotFound desc = could not find container \"5d5c41c937a029a7a7a6fad7aa37d372aa7ae0b6f6359917c55cb1cb810b2d66\": container with ID starting with 5d5c41c937a029a7a7a6fad7aa37d372aa7ae0b6f6359917c55cb1cb810b2d66 not found: ID does not exist" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.517987 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-chmcx"] Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.521729 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-pgh4g" podStartSLOduration=5.521714142 podStartE2EDuration="5.521714142s" podCreationTimestamp="2026-01-21 10:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:56:59.455662841 +0000 UTC m=+1203.916450449" watchObservedRunningTime="2026-01-21 10:56:59.521714142 +0000 UTC m=+1203.982501740" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.553796 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.553855 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjzlb\" (UniqueName: \"kubernetes.io/projected/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-kube-api-access-tjzlb\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.553873 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.553914 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.553980 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.554023 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.554063 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.560899 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-logs\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.561131 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.561242 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.576068 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd5zm"] Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.578418 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-scripts\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.580731 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.594079 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-config-data\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.607899 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjzlb\" (UniqueName: \"kubernetes.io/projected/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-kube-api-access-tjzlb\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.614508 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.643268 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 10:56:59 crc kubenswrapper[4745]: I0121 10:56:59.884626 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.033480 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d73366f6-2bea-40d0-ae8b-8cb61cdd78aa" path="/var/lib/kubelet/pods/d73366f6-2bea-40d0-ae8b-8cb61cdd78aa/volumes" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.080565 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-swift-storage-0\") pod \"879f577c-2f73-4b10-8754-ffaa7af8f361\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.081158 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-svc\") pod \"879f577c-2f73-4b10-8754-ffaa7af8f361\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.081258 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjgts\" (UniqueName: \"kubernetes.io/projected/879f577c-2f73-4b10-8754-ffaa7af8f361-kube-api-access-vjgts\") pod \"879f577c-2f73-4b10-8754-ffaa7af8f361\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.081279 4745 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-nb\") pod \"879f577c-2f73-4b10-8754-ffaa7af8f361\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.081301 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-config\") pod \"879f577c-2f73-4b10-8754-ffaa7af8f361\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.081321 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-sb\") pod \"879f577c-2f73-4b10-8754-ffaa7af8f361\" (UID: \"879f577c-2f73-4b10-8754-ffaa7af8f361\") " Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.107705 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879f577c-2f73-4b10-8754-ffaa7af8f361-kube-api-access-vjgts" (OuterVolumeSpecName: "kube-api-access-vjgts") pod "879f577c-2f73-4b10-8754-ffaa7af8f361" (UID: "879f577c-2f73-4b10-8754-ffaa7af8f361"). InnerVolumeSpecName "kube-api-access-vjgts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.182218 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "879f577c-2f73-4b10-8754-ffaa7af8f361" (UID: "879f577c-2f73-4b10-8754-ffaa7af8f361"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.184638 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "879f577c-2f73-4b10-8754-ffaa7af8f361" (UID: "879f577c-2f73-4b10-8754-ffaa7af8f361"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.189768 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjgts\" (UniqueName: \"kubernetes.io/projected/879f577c-2f73-4b10-8754-ffaa7af8f361-kube-api-access-vjgts\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.189789 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.189798 4745 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.217171 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "879f577c-2f73-4b10-8754-ffaa7af8f361" (UID: "879f577c-2f73-4b10-8754-ffaa7af8f361"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.224704 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "879f577c-2f73-4b10-8754-ffaa7af8f361" (UID: "879f577c-2f73-4b10-8754-ffaa7af8f361"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.246938 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-config" (OuterVolumeSpecName: "config") pod "879f577c-2f73-4b10-8754-ffaa7af8f361" (UID: "879f577c-2f73-4b10-8754-ffaa7af8f361"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.291426 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.292487 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.292502 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/879f577c-2f73-4b10-8754-ffaa7af8f361-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.377849 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.393169 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" event={"ID":"b28e59ca-5792-4c1a-a96c-e6aee4f83026","Type":"ContainerStarted","Data":"cc6fc8be6b4382aa6e25c8185962b59b810ad42f3af895873301f7047b068ee6"} Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.410174 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.410230 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-b2g9z" event={"ID":"879f577c-2f73-4b10-8754-ffaa7af8f361","Type":"ContainerDied","Data":"2c9f00e711aff85ca5bb006c994cc42b2518fd66e6fbdd6b1bb97a82aa69f720"} Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.410276 4745 scope.go:117] "RemoveContainer" containerID="d8579a7e92aed54f52ea4d65eb79e3dc0371ee11ad89b587661183f0884913e2" Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.586599 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-b2g9z"] Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.595228 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-b2g9z"] Jan 21 10:57:00 crc kubenswrapper[4745]: I0121 10:57:00.996898 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 10:57:01 crc kubenswrapper[4745]: I0121 10:57:01.445979 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21","Type":"ContainerStarted","Data":"4502a865c74ef09a295911f1afbef60b9852c2e87670c6659638d58aa2c4ac63"} Jan 21 10:57:01 crc kubenswrapper[4745]: I0121 10:57:01.457565 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77","Type":"ContainerStarted","Data":"bfa4db5f6956178f2df20c21d15ce5db8c7aab98872645c47b8bd09c16c780c9"} Jan 21 10:57:02 crc kubenswrapper[4745]: I0121 10:57:02.050878 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879f577c-2f73-4b10-8754-ffaa7af8f361" path="/var/lib/kubelet/pods/879f577c-2f73-4b10-8754-ffaa7af8f361/volumes" Jan 21 10:57:02 crc kubenswrapper[4745]: I0121 10:57:02.486680 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77","Type":"ContainerStarted","Data":"5a73f82f1b9e366bf0a44de8640208cc967a9a4d2fca33b678991e2c0f1fa438"} Jan 21 10:57:02 crc kubenswrapper[4745]: I0121 10:57:02.490471 4745 generic.go:334] "Generic (PLEG): container finished" podID="b28e59ca-5792-4c1a-a96c-e6aee4f83026" containerID="186bf3942911ebf7cd088b81b1e7b198a1d05796d499a5e5ae81cf61a35050bf" exitCode=0 Jan 21 10:57:02 crc kubenswrapper[4745]: I0121 10:57:02.490599 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" event={"ID":"b28e59ca-5792-4c1a-a96c-e6aee4f83026","Type":"ContainerDied","Data":"186bf3942911ebf7cd088b81b1e7b198a1d05796d499a5e5ae81cf61a35050bf"} Jan 21 10:57:03 crc kubenswrapper[4745]: I0121 10:57:03.513495 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" event={"ID":"b28e59ca-5792-4c1a-a96c-e6aee4f83026","Type":"ContainerStarted","Data":"6793da3a559ea662747b2d18ef5f34684874ba791e0597a0b93dae353561cd83"} Jan 21 10:57:03 crc kubenswrapper[4745]: I0121 10:57:03.514647 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:57:03 crc kubenswrapper[4745]: I0121 10:57:03.521705 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21","Type":"ContainerStarted","Data":"a473e0ec491d77d0beebcd7ee11799dd9660bcc5d151acec54bcbf211305ce7e"} Jan 21 10:57:03 crc kubenswrapper[4745]: I0121 10:57:03.529703 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77","Type":"ContainerStarted","Data":"dea7caf435c9b643b9bf23332e305ff45576afe0b578975a4cb2d3b582c867a8"} Jan 21 10:57:03 crc kubenswrapper[4745]: I0121 10:57:03.550716 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" podStartSLOduration=6.550689694 podStartE2EDuration="6.550689694s" podCreationTimestamp="2026-01-21 10:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:57:03.545109302 +0000 UTC m=+1208.005896900" watchObservedRunningTime="2026-01-21 10:57:03.550689694 +0000 UTC m=+1208.011477292" Jan 21 10:57:04 crc kubenswrapper[4745]: I0121 10:57:04.543584 4745 generic.go:334] "Generic (PLEG): container finished" podID="7ac63b29-670c-44c6-bd89-828ee65aa0e0" containerID="6ef26153aa55b357d813b014c35776fa0255781b742cd0ac4cd65328bcc16dd0" exitCode=0 Jan 21 10:57:04 crc kubenswrapper[4745]: I0121 10:57:04.544053 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kzpw7" event={"ID":"7ac63b29-670c-44c6-bd89-828ee65aa0e0","Type":"ContainerDied","Data":"6ef26153aa55b357d813b014c35776fa0255781b742cd0ac4cd65328bcc16dd0"} Jan 21 10:57:04 crc kubenswrapper[4745]: I0121 10:57:04.546836 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21","Type":"ContainerStarted","Data":"2d31fbe30e0d47b89e60e90281e3aae931596953044362656235f8db6b23a4ab"} Jan 21 10:57:04 crc kubenswrapper[4745]: I0121 10:57:04.608337 4745 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.608312938 podStartE2EDuration="7.608312938s" podCreationTimestamp="2026-01-21 10:56:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:57:04.588848423 +0000 UTC m=+1209.049636011" watchObservedRunningTime="2026-01-21 10:57:04.608312938 +0000 UTC m=+1209.069100536" Jan 21 10:57:04 crc kubenswrapper[4745]: I0121 10:57:04.629180 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.629159099 podStartE2EDuration="6.629159099s" podCreationTimestamp="2026-01-21 10:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:57:04.621054012 +0000 UTC m=+1209.081841600" watchObservedRunningTime="2026-01-21 10:57:04.629159099 +0000 UTC m=+1209.089946697" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.046365 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.123054 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-fernet-keys\") pod \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.123115 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-credential-keys\") pod \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.123193 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-combined-ca-bundle\") pod \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.123351 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdg9m\" (UniqueName: \"kubernetes.io/projected/7ac63b29-670c-44c6-bd89-828ee65aa0e0-kube-api-access-rdg9m\") pod \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.123436 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-scripts\") pod \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.123579 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-config-data\") pod \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\" (UID: \"7ac63b29-670c-44c6-bd89-828ee65aa0e0\") " Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.132095 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ac63b29-670c-44c6-bd89-828ee65aa0e0-kube-api-access-rdg9m" (OuterVolumeSpecName: "kube-api-access-rdg9m") pod "7ac63b29-670c-44c6-bd89-828ee65aa0e0" (UID: "7ac63b29-670c-44c6-bd89-828ee65aa0e0"). InnerVolumeSpecName "kube-api-access-rdg9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.132252 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7ac63b29-670c-44c6-bd89-828ee65aa0e0" (UID: "7ac63b29-670c-44c6-bd89-828ee65aa0e0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.140981 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "7ac63b29-670c-44c6-bd89-828ee65aa0e0" (UID: "7ac63b29-670c-44c6-bd89-828ee65aa0e0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.179782 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-scripts" (OuterVolumeSpecName: "scripts") pod "7ac63b29-670c-44c6-bd89-828ee65aa0e0" (UID: "7ac63b29-670c-44c6-bd89-828ee65aa0e0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.192836 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-config-data" (OuterVolumeSpecName: "config-data") pod "7ac63b29-670c-44c6-bd89-828ee65aa0e0" (UID: "7ac63b29-670c-44c6-bd89-828ee65aa0e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.211197 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ac63b29-670c-44c6-bd89-828ee65aa0e0" (UID: "7ac63b29-670c-44c6-bd89-828ee65aa0e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.227039 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.227093 4745 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.227103 4745 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.227117 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 
10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.227128 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdg9m\" (UniqueName: \"kubernetes.io/projected/7ac63b29-670c-44c6-bd89-828ee65aa0e0-kube-api-access-rdg9m\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.227139 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ac63b29-670c-44c6-bd89-828ee65aa0e0-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.387322 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.387873 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" containerName="glance-log" containerID="cri-o://5a73f82f1b9e366bf0a44de8640208cc967a9a4d2fca33b678991e2c0f1fa438" gracePeriod=30 Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.388375 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" containerName="glance-httpd" containerID="cri-o://dea7caf435c9b643b9bf23332e305ff45576afe0b578975a4cb2d3b582c867a8" gracePeriod=30 Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.518801 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.535603 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.535991 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" 
containerName="glance-log" containerID="cri-o://a473e0ec491d77d0beebcd7ee11799dd9660bcc5d151acec54bcbf211305ce7e" gracePeriod=30 Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.536234 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" containerName="glance-httpd" containerID="cri-o://2d31fbe30e0d47b89e60e90281e3aae931596953044362656235f8db6b23a4ab" gracePeriod=30 Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.650054 4745 generic.go:334] "Generic (PLEG): container finished" podID="40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" containerID="5a73f82f1b9e366bf0a44de8640208cc967a9a4d2fca33b678991e2c0f1fa438" exitCode=143 Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.650349 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77","Type":"ContainerDied","Data":"5a73f82f1b9e366bf0a44de8640208cc967a9a4d2fca33b678991e2c0f1fa438"} Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.664936 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-kzpw7" event={"ID":"7ac63b29-670c-44c6-bd89-828ee65aa0e0","Type":"ContainerDied","Data":"fc786a572a102776fc588fe7d989e85faa6b66b31bdf7a232810134894e46aca"} Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.665006 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc786a572a102776fc588fe7d989e85faa6b66b31bdf7a232810134894e46aca" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.665082 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-kzpw7" Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.705687 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6chvk"] Jan 21 10:57:08 crc kubenswrapper[4745]: I0121 10:57:08.706029 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="dnsmasq-dns" containerID="cri-o://aa6e2bb609344a3ea8c7cca200f3e1920233bef634dc5d5ab4f406f0bfd6ba4d" gracePeriod=10 Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.242982 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-89995c44c-6c5zt"] Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.345523 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78cb545d88-xv4bf"] Jan 21 10:57:09 crc kubenswrapper[4745]: E0121 10:57:09.346247 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879f577c-2f73-4b10-8754-ffaa7af8f361" containerName="init" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.347097 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="879f577c-2f73-4b10-8754-ffaa7af8f361" containerName="init" Jan 21 10:57:09 crc kubenswrapper[4745]: E0121 10:57:09.347217 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ac63b29-670c-44c6-bd89-828ee65aa0e0" containerName="keystone-bootstrap" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.347286 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ac63b29-670c-44c6-bd89-828ee65aa0e0" containerName="keystone-bootstrap" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.347510 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="879f577c-2f73-4b10-8754-ffaa7af8f361" containerName="init" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.359034 4745 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7ac63b29-670c-44c6-bd89-828ee65aa0e0" containerName="keystone-bootstrap" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.360587 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.367069 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.417589 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-kzpw7"] Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.448978 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-kzpw7"] Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.472142 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5v2s\" (UniqueName: \"kubernetes.io/projected/8d2746d8-86a1-412c-8cac-b737fff90886-kube-api-access-g5v2s\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.472193 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-config-data\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.472230 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-scripts\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.472274 
4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-combined-ca-bundle\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.472300 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-secret-key\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.472337 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d2746d8-86a1-412c-8cac-b737fff90886-logs\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.472367 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-tls-certs\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.491308 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b7d8c9d7c-75nls"] Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.515754 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78cb545d88-xv4bf"] Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.563201 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-bootstrap-45lw5"] Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.564273 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.576613 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5v2s\" (UniqueName: \"kubernetes.io/projected/8d2746d8-86a1-412c-8cac-b737fff90886-kube-api-access-g5v2s\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.576669 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-config-data\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.576706 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-scripts\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.576758 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-combined-ca-bundle\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.576784 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-secret-key\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.576818 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d2746d8-86a1-412c-8cac-b737fff90886-logs\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.576846 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-tls-certs\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.578321 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-scripts\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.580258 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-config-data\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.581283 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d2746d8-86a1-412c-8cac-b737fff90886-logs\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " 
pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.594005 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-tls-certs\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.594129 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2rgkp" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.594388 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.594528 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.596779 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5cdbfc4d4d-pm6ln"] Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.598355 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.598888 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.600243 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.605756 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-secret-key\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.606994 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-combined-ca-bundle\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.625457 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5v2s\" (UniqueName: \"kubernetes.io/projected/8d2746d8-86a1-412c-8cac-b737fff90886-kube-api-access-g5v2s\") pod \"horizon-78cb545d88-xv4bf\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.634940 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-45lw5"] Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.676832 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cdbfc4d4d-pm6ln"] Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.682787 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dztw\" (UniqueName: \"kubernetes.io/projected/1b30531d-e957-4efd-b09c-d5d0b5fd1382-kube-api-access-8dztw\") pod 
\"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.682907 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b30531d-e957-4efd-b09c-d5d0b5fd1382-combined-ca-bundle\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.682942 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b30531d-e957-4efd-b09c-d5d0b5fd1382-horizon-tls-certs\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.682998 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-fernet-keys\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.683041 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-config-data\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.683094 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1b30531d-e957-4efd-b09c-d5d0b5fd1382-horizon-secret-key\") pod 
\"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.683285 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrfq\" (UniqueName: \"kubernetes.io/projected/444abf7d-45e7-490e-a1af-5a082b51a3af-kube-api-access-ztrfq\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.683330 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b30531d-e957-4efd-b09c-d5d0b5fd1382-scripts\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.683483 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-scripts\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.683632 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b30531d-e957-4efd-b09c-d5d0b5fd1382-config-data\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.683668 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-credential-keys\") pod 
\"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.683703 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b30531d-e957-4efd-b09c-d5d0b5fd1382-logs\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.683976 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-combined-ca-bundle\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.708485 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.734059 4745 generic.go:334] "Generic (PLEG): container finished" podID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerID="aa6e2bb609344a3ea8c7cca200f3e1920233bef634dc5d5ab4f406f0bfd6ba4d" exitCode=0 Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.734134 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" event={"ID":"1113c34b-a9b5-4849-b1d8-b46b4e622841","Type":"ContainerDied","Data":"aa6e2bb609344a3ea8c7cca200f3e1920233bef634dc5d5ab4f406f0bfd6ba4d"} Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.754046 4745 generic.go:334] "Generic (PLEG): container finished" podID="6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" containerID="a473e0ec491d77d0beebcd7ee11799dd9660bcc5d151acec54bcbf211305ce7e" exitCode=143 Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.754139 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21","Type":"ContainerDied","Data":"a473e0ec491d77d0beebcd7ee11799dd9660bcc5d151acec54bcbf211305ce7e"} Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.782044 4745 generic.go:334] "Generic (PLEG): container finished" podID="40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" containerID="dea7caf435c9b643b9bf23332e305ff45576afe0b578975a4cb2d3b582c867a8" exitCode=0 Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.782130 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77","Type":"ContainerDied","Data":"dea7caf435c9b643b9bf23332e305ff45576afe0b578975a4cb2d3b582c867a8"} Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788361 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-scripts\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788440 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b30531d-e957-4efd-b09c-d5d0b5fd1382-config-data\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788466 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-credential-keys\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788491 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b30531d-e957-4efd-b09c-d5d0b5fd1382-logs\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788565 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-combined-ca-bundle\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788638 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dztw\" (UniqueName: \"kubernetes.io/projected/1b30531d-e957-4efd-b09c-d5d0b5fd1382-kube-api-access-8dztw\") pod 
\"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788681 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b30531d-e957-4efd-b09c-d5d0b5fd1382-combined-ca-bundle\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788707 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b30531d-e957-4efd-b09c-d5d0b5fd1382-horizon-tls-certs\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788746 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-fernet-keys\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788784 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-config-data\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788815 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1b30531d-e957-4efd-b09c-d5d0b5fd1382-horizon-secret-key\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " 
pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788889 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztrfq\" (UniqueName: \"kubernetes.io/projected/444abf7d-45e7-490e-a1af-5a082b51a3af-kube-api-access-ztrfq\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.788914 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b30531d-e957-4efd-b09c-d5d0b5fd1382-scripts\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.789930 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b30531d-e957-4efd-b09c-d5d0b5fd1382-scripts\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.792573 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b30531d-e957-4efd-b09c-d5d0b5fd1382-logs\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.797911 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b30531d-e957-4efd-b09c-d5d0b5fd1382-combined-ca-bundle\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.798995 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1b30531d-e957-4efd-b09c-d5d0b5fd1382-config-data\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.800162 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-credential-keys\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.821572 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dztw\" (UniqueName: \"kubernetes.io/projected/1b30531d-e957-4efd-b09c-d5d0b5fd1382-kube-api-access-8dztw\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.826391 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1b30531d-e957-4efd-b09c-d5d0b5fd1382-horizon-secret-key\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.828400 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-fernet-keys\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.832259 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-config-data\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.834188 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-combined-ca-bundle\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.834711 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztrfq\" (UniqueName: \"kubernetes.io/projected/444abf7d-45e7-490e-a1af-5a082b51a3af-kube-api-access-ztrfq\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.836146 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-scripts\") pod \"keystone-bootstrap-45lw5\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:09 crc kubenswrapper[4745]: I0121 10:57:09.837980 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b30531d-e957-4efd-b09c-d5d0b5fd1382-horizon-tls-certs\") pod \"horizon-5cdbfc4d4d-pm6ln\" (UID: \"1b30531d-e957-4efd-b09c-d5d0b5fd1382\") " pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:10 crc kubenswrapper[4745]: I0121 10:57:10.003904 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:57:10 crc kubenswrapper[4745]: I0121 10:57:10.028458 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ac63b29-670c-44c6-bd89-828ee65aa0e0" path="/var/lib/kubelet/pods/7ac63b29-670c-44c6-bd89-828ee65aa0e0/volumes" Jan 21 10:57:10 crc kubenswrapper[4745]: I0121 10:57:10.028926 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:57:10 crc kubenswrapper[4745]: I0121 10:57:10.799660 4745 generic.go:334] "Generic (PLEG): container finished" podID="6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" containerID="2d31fbe30e0d47b89e60e90281e3aae931596953044362656235f8db6b23a4ab" exitCode=0 Jan 21 10:57:10 crc kubenswrapper[4745]: I0121 10:57:10.799710 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21","Type":"ContainerDied","Data":"2d31fbe30e0d47b89e60e90281e3aae931596953044362656235f8db6b23a4ab"} Jan 21 10:57:13 crc kubenswrapper[4745]: I0121 10:57:13.066931 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Jan 21 10:57:15 crc kubenswrapper[4745]: I0121 10:57:15.866882 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:57:15 crc kubenswrapper[4745]: I0121 10:57:15.866962 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:57:17 crc kubenswrapper[4745]: E0121 10:57:17.022827 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 10:57:17 crc kubenswrapper[4745]: E0121 10:57:17.023183 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n655hf7h685h548h54fh78h546h5b4h5c6h646h59chdch59hd5h5bch58dh555h545h5fdhf6h5c7h64fh54h575hd4h97h86hf6h9bh78h646h5cfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dmtjg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilit
ies{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-89995c44c-6c5zt_openstack(de898638-1d3c-459e-8844-326d868c0852): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:57:17 crc kubenswrapper[4745]: E0121 10:57:17.113033 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-89995c44c-6c5zt" podUID="de898638-1d3c-459e-8844-326d868c0852" Jan 21 10:57:17 crc kubenswrapper[4745]: E0121 10:57:17.133993 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 10:57:17 crc kubenswrapper[4745]: E0121 10:57:17.134306 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5cdh64bh55dh5cfh6dh5ch69h54dh547h574h55ch577h5d9h66ch66dhc6h5ch5c8h5ffh65hd9h6bh587h68dh564hb7h55hb5h55ch668h685h5f4q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vr4rf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-57b567c785-rppfm_openstack(0fe64492-fe0a-4f98-b1c6-69a555f6d19f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:57:17 crc kubenswrapper[4745]: E0121 
10:57:17.136838 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-57b567c785-rppfm" podUID="0fe64492-fe0a-4f98-b1c6-69a555f6d19f" Jan 21 10:57:17 crc kubenswrapper[4745]: E0121 10:57:17.151574 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 21 10:57:17 crc kubenswrapper[4745]: E0121 10:57:17.151818 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n94h7dh686hb5h58dh5b8h56dhc5h5b5h676h5dbh5c8h68ch65fh9ch8h664h59ch69h548hf7hd8h65bhcch596h55dh645h699h89h56bh5cbh67cq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jj76s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5b7d8c9d7c-75nls_openstack(d966de46-956a-482d-8960-4a41cbd53762): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:57:17 crc kubenswrapper[4745]: E0121 
10:57:17.154205 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5b7d8c9d7c-75nls" podUID="d966de46-956a-482d-8960-4a41cbd53762" Jan 21 10:57:18 crc kubenswrapper[4745]: I0121 10:57:18.073126 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Jan 21 10:57:19 crc kubenswrapper[4745]: E0121 10:57:19.237064 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Jan 21 10:57:19 crc kubenswrapper[4745]: E0121 10:57:19.237803 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxjj9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-hj5fq_openstack(be0086c8-abfc-4740-9d81-62eab45e6507): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:57:19 crc kubenswrapper[4745]: E0121 10:57:19.239062 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-hj5fq" podUID="be0086c8-abfc-4740-9d81-62eab45e6507" Jan 21 10:57:19 crc kubenswrapper[4745]: E0121 10:57:19.904460 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-hj5fq" podUID="be0086c8-abfc-4740-9d81-62eab45e6507" Jan 21 10:57:23 crc kubenswrapper[4745]: I0121 10:57:23.067138 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Jan 21 10:57:23 crc kubenswrapper[4745]: I0121 10:57:23.067768 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" Jan 21 10:57:28 crc kubenswrapper[4745]: I0121 10:57:28.067064 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: connect: connection refused" Jan 21 10:57:29 crc kubenswrapper[4745]: I0121 10:57:29.497242 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 10:57:29 crc kubenswrapper[4745]: I0121 
10:57:29.497690 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 10:57:29 crc kubenswrapper[4745]: I0121 10:57:29.644307 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 10:57:29 crc kubenswrapper[4745]: I0121 10:57:29.644373 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 10:57:30 crc kubenswrapper[4745]: I0121 10:57:30.021669 4745 generic.go:334] "Generic (PLEG): container finished" podID="006a9d44-bc1a-41ce-8103-591327ca1afa" containerID="bc7e126930deceee5930e454d0bbcce31f72426de62cde678d37ff82abe2e933" exitCode=0 Jan 21 10:57:30 crc kubenswrapper[4745]: I0121 10:57:30.021766 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pgh4g" event={"ID":"006a9d44-bc1a-41ce-8103-591327ca1afa","Type":"ContainerDied","Data":"bc7e126930deceee5930e454d0bbcce31f72426de62cde678d37ff82abe2e933"} Jan 21 10:57:38 crc kubenswrapper[4745]: I0121 10:57:38.069048 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.071196 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.626475 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.631725 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.648943 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674178 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj76s\" (UniqueName: \"kubernetes.io/projected/d966de46-956a-482d-8960-4a41cbd53762-kube-api-access-jj76s\") pod \"d966de46-956a-482d-8960-4a41cbd53762\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674258 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-config-data\") pod \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674280 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-horizon-secret-key\") pod \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674307 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr4rf\" (UniqueName: \"kubernetes.io/projected/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-kube-api-access-vr4rf\") pod \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674332 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d966de46-956a-482d-8960-4a41cbd53762-logs\") pod \"d966de46-956a-482d-8960-4a41cbd53762\" (UID: 
\"d966de46-956a-482d-8960-4a41cbd53762\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674363 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d966de46-956a-482d-8960-4a41cbd53762-horizon-secret-key\") pod \"d966de46-956a-482d-8960-4a41cbd53762\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674386 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-logs\") pod \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674431 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmtjg\" (UniqueName: \"kubernetes.io/projected/de898638-1d3c-459e-8844-326d868c0852-kube-api-access-dmtjg\") pod \"de898638-1d3c-459e-8844-326d868c0852\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674450 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de898638-1d3c-459e-8844-326d868c0852-horizon-secret-key\") pod \"de898638-1d3c-459e-8844-326d868c0852\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674492 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-scripts\") pod \"de898638-1d3c-459e-8844-326d868c0852\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674546 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-config-data\") pod \"d966de46-956a-482d-8960-4a41cbd53762\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674565 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-scripts\") pod \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\" (UID: \"0fe64492-fe0a-4f98-b1c6-69a555f6d19f\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674581 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-config-data\") pod \"de898638-1d3c-459e-8844-326d868c0852\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674802 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-scripts\") pod \"d966de46-956a-482d-8960-4a41cbd53762\" (UID: \"d966de46-956a-482d-8960-4a41cbd53762\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.674818 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de898638-1d3c-459e-8844-326d868c0852-logs\") pod \"de898638-1d3c-459e-8844-326d868c0852\" (UID: \"de898638-1d3c-459e-8844-326d868c0852\") " Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.676163 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-config-data" (OuterVolumeSpecName: "config-data") pod "0fe64492-fe0a-4f98-b1c6-69a555f6d19f" (UID: "0fe64492-fe0a-4f98-b1c6-69a555f6d19f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.676158 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-scripts" (OuterVolumeSpecName: "scripts") pod "0fe64492-fe0a-4f98-b1c6-69a555f6d19f" (UID: "0fe64492-fe0a-4f98-b1c6-69a555f6d19f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.676690 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-scripts" (OuterVolumeSpecName: "scripts") pod "de898638-1d3c-459e-8844-326d868c0852" (UID: "de898638-1d3c-459e-8844-326d868c0852"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.676963 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-config-data" (OuterVolumeSpecName: "config-data") pod "de898638-1d3c-459e-8844-326d868c0852" (UID: "de898638-1d3c-459e-8844-326d868c0852"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.677055 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-config-data" (OuterVolumeSpecName: "config-data") pod "d966de46-956a-482d-8960-4a41cbd53762" (UID: "d966de46-956a-482d-8960-4a41cbd53762"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.677065 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de898638-1d3c-459e-8844-326d868c0852-logs" (OuterVolumeSpecName: "logs") pod "de898638-1d3c-459e-8844-326d868c0852" (UID: "de898638-1d3c-459e-8844-326d868c0852"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.677548 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-scripts" (OuterVolumeSpecName: "scripts") pod "d966de46-956a-482d-8960-4a41cbd53762" (UID: "d966de46-956a-482d-8960-4a41cbd53762"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.677850 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-logs" (OuterVolumeSpecName: "logs") pod "0fe64492-fe0a-4f98-b1c6-69a555f6d19f" (UID: "0fe64492-fe0a-4f98-b1c6-69a555f6d19f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.678153 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d966de46-956a-482d-8960-4a41cbd53762-logs" (OuterVolumeSpecName: "logs") pod "d966de46-956a-482d-8960-4a41cbd53762" (UID: "d966de46-956a-482d-8960-4a41cbd53762"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.682801 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d966de46-956a-482d-8960-4a41cbd53762-kube-api-access-jj76s" (OuterVolumeSpecName: "kube-api-access-jj76s") pod "d966de46-956a-482d-8960-4a41cbd53762" (UID: "d966de46-956a-482d-8960-4a41cbd53762"). InnerVolumeSpecName "kube-api-access-jj76s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.683210 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-kube-api-access-vr4rf" (OuterVolumeSpecName: "kube-api-access-vr4rf") pod "0fe64492-fe0a-4f98-b1c6-69a555f6d19f" (UID: "0fe64492-fe0a-4f98-b1c6-69a555f6d19f"). InnerVolumeSpecName "kube-api-access-vr4rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.684900 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de898638-1d3c-459e-8844-326d868c0852-kube-api-access-dmtjg" (OuterVolumeSpecName: "kube-api-access-dmtjg") pod "de898638-1d3c-459e-8844-326d868c0852" (UID: "de898638-1d3c-459e-8844-326d868c0852"). InnerVolumeSpecName "kube-api-access-dmtjg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.691552 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de898638-1d3c-459e-8844-326d868c0852-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "de898638-1d3c-459e-8844-326d868c0852" (UID: "de898638-1d3c-459e-8844-326d868c0852"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.692041 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d966de46-956a-482d-8960-4a41cbd53762-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d966de46-956a-482d-8960-4a41cbd53762" (UID: "d966de46-956a-482d-8960-4a41cbd53762"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.692077 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "0fe64492-fe0a-4f98-b1c6-69a555f6d19f" (UID: "0fe64492-fe0a-4f98-b1c6-69a555f6d19f"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775735 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775775 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/de898638-1d3c-459e-8844-326d868c0852-logs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775785 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj76s\" (UniqueName: \"kubernetes.io/projected/d966de46-956a-482d-8960-4a41cbd53762-kube-api-access-jj76s\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775796 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-config-data\") on node \"crc\" DevicePath \"\"" Jan 
21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775805 4745 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775814 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr4rf\" (UniqueName: \"kubernetes.io/projected/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-kube-api-access-vr4rf\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775823 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d966de46-956a-482d-8960-4a41cbd53762-logs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775831 4745 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d966de46-956a-482d-8960-4a41cbd53762-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775841 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775850 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmtjg\" (UniqueName: \"kubernetes.io/projected/de898638-1d3c-459e-8844-326d868c0852-kube-api-access-dmtjg\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775859 4745 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/de898638-1d3c-459e-8844-326d868c0852-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775867 4745 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775874 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d966de46-956a-482d-8960-4a41cbd53762-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775883 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0fe64492-fe0a-4f98-b1c6-69a555f6d19f-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: I0121 10:57:43.775890 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/de898638-1d3c-459e-8844-326d868c0852-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:57:43 crc kubenswrapper[4745]: E0121 10:57:43.909429 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 21 10:57:43 crc kubenswrapper[4745]: E0121 10:57:43.909815 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d 
db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6ddd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-rrpzk_openstack(939e01d6-c378-485e-bd8c-8d394151ef3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 
21 10:57:43 crc kubenswrapper[4745]: E0121 10:57:43.911054 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-rrpzk" podUID="939e01d6-c378-485e-bd8c-8d394151ef3b" Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.176766 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-57b567c785-rppfm" event={"ID":"0fe64492-fe0a-4f98-b1c6-69a555f6d19f","Type":"ContainerDied","Data":"0089ccca1cc172a72c3a4cd0e8e86532e9585cc920094c97b8c051ec05c8ca10"} Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.176860 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-57b567c785-rppfm" Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.186895 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5b7d8c9d7c-75nls" Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.186884 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5b7d8c9d7c-75nls" event={"ID":"d966de46-956a-482d-8960-4a41cbd53762","Type":"ContainerDied","Data":"6f86c344e067f202e3191bdd9e098e66e1035146d4ac329b5f32f80f75f5b12d"} Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.190076 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-89995c44c-6c5zt" Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.190147 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-89995c44c-6c5zt" event={"ID":"de898638-1d3c-459e-8844-326d868c0852","Type":"ContainerDied","Data":"4c42499a1a53607c9cacd52d4fe0a045c7e75e15e200e56e68f3511d24324128"} Jan 21 10:57:44 crc kubenswrapper[4745]: E0121 10:57:44.191309 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-rrpzk" podUID="939e01d6-c378-485e-bd8c-8d394151ef3b" Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.264336 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-57b567c785-rppfm"] Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.289345 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-57b567c785-rppfm"] Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.311735 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-89995c44c-6c5zt"] Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.322372 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-89995c44c-6c5zt"] Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.336908 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5b7d8c9d7c-75nls"] Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.345058 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5b7d8c9d7c-75nls"] Jan 21 10:57:44 crc kubenswrapper[4745]: E0121 10:57:44.792912 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Jan 21 
10:57:44 crc kubenswrapper[4745]: E0121 10:57:44.793389 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rvxcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-tsql6_openstack(267909cf-90b8-451d-9882-715e44dc2c30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:57:44 crc kubenswrapper[4745]: E0121 
10:57:44.796078 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-tsql6" podUID="267909cf-90b8-451d-9882-715e44dc2c30"
Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.906378 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6chvk"
Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.944808 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.958082 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pgh4g"
Jan 21 10:57:44 crc kubenswrapper[4745]: I0121 10:57:44.980736 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100109 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-config-data\") pod \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100178 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-nb\") pod \"1113c34b-a9b5-4849-b1d8-b46b4e622841\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100213 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-combined-ca-bundle\") pod \"006a9d44-bc1a-41ce-8103-591327ca1afa\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100288 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-scripts\") pod \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100332 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ts8s6\" (UniqueName: \"kubernetes.io/projected/006a9d44-bc1a-41ce-8103-591327ca1afa-kube-api-access-ts8s6\") pod \"006a9d44-bc1a-41ce-8103-591327ca1afa\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100423 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-combined-ca-bundle\") pod \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100467 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdhjh\" (UniqueName: \"kubernetes.io/projected/1113c34b-a9b5-4849-b1d8-b46b4e622841-kube-api-access-mdhjh\") pod \"1113c34b-a9b5-4849-b1d8-b46b4e622841\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100665 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjzlb\" (UniqueName: \"kubernetes.io/projected/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-kube-api-access-tjzlb\") pod \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100718 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-config\") pod \"1113c34b-a9b5-4849-b1d8-b46b4e622841\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100756 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-svc\") pod \"1113c34b-a9b5-4849-b1d8-b46b4e622841\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100820 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-config-data\") pod \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100844 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6lbq\" (UniqueName: \"kubernetes.io/projected/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-kube-api-access-t6lbq\") pod \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100870 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100902 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-logs\") pod \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100928 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-config\") pod \"006a9d44-bc1a-41ce-8103-591327ca1afa\" (UID: \"006a9d44-bc1a-41ce-8103-591327ca1afa\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100950 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-httpd-run\") pod \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.100974 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-swift-storage-0\") pod \"1113c34b-a9b5-4849-b1d8-b46b4e622841\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.101002 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-httpd-run\") pod \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.101032 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-scripts\") pod \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.101062 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-sb\") pod \"1113c34b-a9b5-4849-b1d8-b46b4e622841\" (UID: \"1113c34b-a9b5-4849-b1d8-b46b4e622841\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.101089 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-logs\") pod \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\" (UID: \"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.101120 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.101142 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-combined-ca-bundle\") pod \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\" (UID: \"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21\") "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.106272 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-kube-api-access-tjzlb" (OuterVolumeSpecName: "kube-api-access-tjzlb") pod "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" (UID: "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21"). InnerVolumeSpecName "kube-api-access-tjzlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.106982 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-scripts" (OuterVolumeSpecName: "scripts") pod "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" (UID: "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.107247 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" (UID: "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.107455 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-logs" (OuterVolumeSpecName: "logs") pod "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" (UID: "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.107729 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" (UID: "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.121339 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" (UID: "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.125061 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-logs" (OuterVolumeSpecName: "logs") pod "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" (UID: "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.129935 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-scripts" (OuterVolumeSpecName: "scripts") pod "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" (UID: "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.133483 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-kube-api-access-t6lbq" (OuterVolumeSpecName: "kube-api-access-t6lbq") pod "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" (UID: "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77"). InnerVolumeSpecName "kube-api-access-t6lbq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.133643 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-config" (OuterVolumeSpecName: "config") pod "006a9d44-bc1a-41ce-8103-591327ca1afa" (UID: "006a9d44-bc1a-41ce-8103-591327ca1afa"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.134103 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1113c34b-a9b5-4849-b1d8-b46b4e622841-kube-api-access-mdhjh" (OuterVolumeSpecName: "kube-api-access-mdhjh") pod "1113c34b-a9b5-4849-b1d8-b46b4e622841" (UID: "1113c34b-a9b5-4849-b1d8-b46b4e622841"). InnerVolumeSpecName "kube-api-access-mdhjh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.141961 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" (UID: "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.142070 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006a9d44-bc1a-41ce-8103-591327ca1afa-kube-api-access-ts8s6" (OuterVolumeSpecName: "kube-api-access-ts8s6") pod "006a9d44-bc1a-41ce-8103-591327ca1afa" (UID: "006a9d44-bc1a-41ce-8103-591327ca1afa"). InnerVolumeSpecName "kube-api-access-ts8s6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.150094 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" (UID: "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.157089 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "006a9d44-bc1a-41ce-8103-591327ca1afa" (UID: "006a9d44-bc1a-41ce-8103-591327ca1afa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.171955 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" (UID: "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.184541 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-config" (OuterVolumeSpecName: "config") pod "1113c34b-a9b5-4849-b1d8-b46b4e622841" (UID: "1113c34b-a9b5-4849-b1d8-b46b4e622841"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.192553 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1113c34b-a9b5-4849-b1d8-b46b4e622841" (UID: "1113c34b-a9b5-4849-b1d8-b46b4e622841"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.195395 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1113c34b-a9b5-4849-b1d8-b46b4e622841" (UID: "1113c34b-a9b5-4849-b1d8-b46b4e622841"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.195985 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-config-data" (OuterVolumeSpecName: "config-data") pod "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" (UID: "6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205316 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-config\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205346 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6lbq\" (UniqueName: \"kubernetes.io/projected/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-kube-api-access-t6lbq\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205366 4745 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205376 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-logs\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205385 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-config\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205393 4745 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205401 4745 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205410 4745 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205419 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205427 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205434 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-logs\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205447 4745 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" "
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205456 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205466 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205476 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/006a9d44-bc1a-41ce-8103-591327ca1afa-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205485 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205493 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ts8s6\" (UniqueName: \"kubernetes.io/projected/006a9d44-bc1a-41ce-8103-591327ca1afa-kube-api-access-ts8s6\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205502 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205510 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdhjh\" (UniqueName: \"kubernetes.io/projected/1113c34b-a9b5-4849-b1d8-b46b4e622841-kube-api-access-mdhjh\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.205520 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjzlb\" (UniqueName: \"kubernetes.io/projected/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21-kube-api-access-tjzlb\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.206186 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1113c34b-a9b5-4849-b1d8-b46b4e622841" (UID: "1113c34b-a9b5-4849-b1d8-b46b4e622841"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.206905 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-pgh4g" event={"ID":"006a9d44-bc1a-41ce-8103-591327ca1afa","Type":"ContainerDied","Data":"8d8a251e7b5d69a53b116b8e11b746251af689e7f608a5fb34c902fd447b3c3e"}
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.206939 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d8a251e7b5d69a53b116b8e11b746251af689e7f608a5fb34c902fd447b3c3e"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.207025 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-pgh4g"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.213779 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21","Type":"ContainerDied","Data":"4502a865c74ef09a295911f1afbef60b9852c2e87670c6659638d58aa2c4ac63"}
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.213874 4745 scope.go:117] "RemoveContainer" containerID="2d31fbe30e0d47b89e60e90281e3aae931596953044362656235f8db6b23a4ab"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.214226 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.231847 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-config-data" (OuterVolumeSpecName: "config-data") pod "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" (UID: "40bb52db-87a4-4b7f-8425-7bbcbe3e2e77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.233031 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"40bb52db-87a4-4b7f-8425-7bbcbe3e2e77","Type":"ContainerDied","Data":"bfa4db5f6956178f2df20c21d15ce5db8c7aab98872645c47b8bd09c16c780c9"}
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.233964 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.252640 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-6chvk"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.252845 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" event={"ID":"1113c34b-a9b5-4849-b1d8-b46b4e622841","Type":"ContainerDied","Data":"8bf462d9c9f0ad5068f21ac6a6bad0ff1f620a553e34bcc38177ac79b74366bb"}
Jan 21 10:57:45 crc kubenswrapper[4745]: E0121 10:57:45.261254 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-tsql6" podUID="267909cf-90b8-451d-9882-715e44dc2c30"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.288230 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1113c34b-a9b5-4849-b1d8-b46b4e622841" (UID: "1113c34b-a9b5-4849-b1d8-b46b4e622841"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.291026 4745 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.305907 4745 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.310775 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.311719 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.311783 4745 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.311801 4745 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.311885 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1113c34b-a9b5-4849-b1d8-b46b4e622841-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.341848 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.366023 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.385999 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.395370 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.405301 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 10:57:45 crc kubenswrapper[4745]: E0121 10:57:45.405731 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" containerName="glance-log"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.405750 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" containerName="glance-log"
Jan 21 10:57:45 crc kubenswrapper[4745]: E0121 10:57:45.405766 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" containerName="glance-httpd"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.405772 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" containerName="glance-httpd"
Jan 21 10:57:45 crc kubenswrapper[4745]: E0121 10:57:45.405787 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" containerName="glance-httpd"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.405793 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" containerName="glance-httpd"
Jan 21 10:57:45 crc kubenswrapper[4745]: E0121 10:57:45.405802 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="init"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.405811 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="init"
Jan 21 10:57:45 crc kubenswrapper[4745]: E0121 10:57:45.405821 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="dnsmasq-dns"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.405828 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="dnsmasq-dns"
Jan 21 10:57:45 crc kubenswrapper[4745]: E0121 10:57:45.405847 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" containerName="glance-log"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.405853 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" containerName="glance-log"
Jan 21 10:57:45 crc kubenswrapper[4745]: E0121 10:57:45.405865 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="006a9d44-bc1a-41ce-8103-591327ca1afa" containerName="neutron-db-sync"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.405871 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="006a9d44-bc1a-41ce-8103-591327ca1afa" containerName="neutron-db-sync"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.406045 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="dnsmasq-dns"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.406057 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" containerName="glance-httpd"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.406071 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" containerName="glance-log"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.406079 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" containerName="glance-httpd"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.406087 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="006a9d44-bc1a-41ce-8103-591327ca1afa" containerName="neutron-db-sync"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.406097 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" containerName="glance-log"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.407002 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.414289 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.415921 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.418829 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-6gq8x"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.419045 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.419296 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.424538 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.424938 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.427291 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.432328 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.449666 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521142 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521242 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521296 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521328 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521361 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-logs\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521389 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zw4t\" (UniqueName: \"kubernetes.io/projected/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-kube-api-access-5zw4t\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521412 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521445 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph6p8\" (UniqueName: \"kubernetes.io/projected/e982fb4c-3818-4f04-b7ed-c32666261f07-kube-api-access-ph6p8\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521466 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521491 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521519 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521572 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521602 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-logs\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0"
Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521624 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521668 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.521690 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.597611 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6chvk"] Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.605394 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-6chvk"] Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623103 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623165 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623206 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623234 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-logs\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623251 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zw4t\" (UniqueName: \"kubernetes.io/projected/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-kube-api-access-5zw4t\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623265 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623294 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph6p8\" (UniqueName: \"kubernetes.io/projected/e982fb4c-3818-4f04-b7ed-c32666261f07-kube-api-access-ph6p8\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") 
" pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623312 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623329 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623349 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623375 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623392 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-logs\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 
10:57:45.623410 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623435 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623456 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.623495 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.624481 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-logs\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.624634 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.625980 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-logs\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.626319 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.626362 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.626477 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.631016 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.634343 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.635819 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.635919 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.636949 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.636969 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.637710 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-config-data\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.644488 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zw4t\" (UniqueName: \"kubernetes.io/projected/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-kube-api-access-5zw4t\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.649070 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph6p8\" (UniqueName: \"kubernetes.io/projected/e982fb4c-3818-4f04-b7ed-c32666261f07-kube-api-access-ph6p8\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.656250 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-scripts\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.666785 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " 
pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.671232 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.745491 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.769051 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.866968 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:57:45 crc kubenswrapper[4745]: I0121 10:57:45.867115 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.044264 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fe64492-fe0a-4f98-b1c6-69a555f6d19f" path="/var/lib/kubelet/pods/0fe64492-fe0a-4f98-b1c6-69a555f6d19f/volumes" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.044893 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" 
path="/var/lib/kubelet/pods/1113c34b-a9b5-4849-b1d8-b46b4e622841/volumes" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.045648 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40bb52db-87a4-4b7f-8425-7bbcbe3e2e77" path="/var/lib/kubelet/pods/40bb52db-87a4-4b7f-8425-7bbcbe3e2e77/volumes" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.046857 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21" path="/var/lib/kubelet/pods/6c26d1a0-6a7b-4344-91c0-3eba8b5e0e21/volumes" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.047842 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d966de46-956a-482d-8960-4a41cbd53762" path="/var/lib/kubelet/pods/d966de46-956a-482d-8960-4a41cbd53762/volumes" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.048495 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de898638-1d3c-459e-8844-326d868c0852" path="/var/lib/kubelet/pods/de898638-1d3c-459e-8844-326d868c0852/volumes" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.253567 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-ld94n"] Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.257554 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.347679 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-ld94n"] Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.364983 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-svc\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.365068 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-config\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.365144 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.365174 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.365204 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-rwdsn\" (UniqueName: \"kubernetes.io/projected/c501f46e-be57-458d-bb01-a1db3aecbd93-kube-api-access-rwdsn\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.365293 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.407211 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-94bcb9f8b-t6knd"] Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.410798 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.413897 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.416284 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-mgwzl" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.416467 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.416686 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.419348 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-94bcb9f8b-t6knd"] Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.468515 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-combined-ca-bundle\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.468666 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-svc\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.468727 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6ktr\" (UniqueName: \"kubernetes.io/projected/89c568b0-5492-496e-a324-93aeb78a82fd-kube-api-access-g6ktr\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.468819 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-httpd-config\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.468870 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-config\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.468975 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-ovndb-tls-certs\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.469053 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.469091 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.469117 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-config\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.469134 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwdsn\" (UniqueName: \"kubernetes.io/projected/c501f46e-be57-458d-bb01-a1db3aecbd93-kube-api-access-rwdsn\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.469157 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.470188 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.470745 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-svc\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.471265 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-config\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.471802 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.472295 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-swift-storage-0\") pod 
\"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.501764 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwdsn\" (UniqueName: \"kubernetes.io/projected/c501f46e-be57-458d-bb01-a1db3aecbd93-kube-api-access-rwdsn\") pod \"dnsmasq-dns-55f844cf75-ld94n\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") " pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.570883 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-combined-ca-bundle\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.570949 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6ktr\" (UniqueName: \"kubernetes.io/projected/89c568b0-5492-496e-a324-93aeb78a82fd-kube-api-access-g6ktr\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.570995 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-httpd-config\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.571060 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-ovndb-tls-certs\") pod \"neutron-94bcb9f8b-t6knd\" (UID: 
\"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.571151 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-config\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.575420 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-combined-ca-bundle\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.579613 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-ovndb-tls-certs\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.585566 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-config\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.593232 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-httpd-config\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.596805 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6ktr\" (UniqueName: \"kubernetes.io/projected/89c568b0-5492-496e-a324-93aeb78a82fd-kube-api-access-g6ktr\") pod \"neutron-94bcb9f8b-t6knd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.605590 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:46 crc kubenswrapper[4745]: I0121 10:57:46.737167 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:47 crc kubenswrapper[4745]: E0121 10:57:47.341487 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 21 10:57:47 crc kubenswrapper[4745]: E0121 10:57:47.341956 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dnskw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-6x5s4_openstack(9ac43469-c72e-486a-80bf-f6de6bdfa199): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:57:47 crc kubenswrapper[4745]: E0121 10:57:47.343188 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-6x5s4" podUID="9ac43469-c72e-486a-80bf-f6de6bdfa199" Jan 21 10:57:47 crc kubenswrapper[4745]: I0121 10:57:47.618136 4745 scope.go:117] "RemoveContainer" containerID="a473e0ec491d77d0beebcd7ee11799dd9660bcc5d151acec54bcbf211305ce7e" Jan 21 10:57:47 crc kubenswrapper[4745]: I0121 10:57:47.763429 4745 scope.go:117] "RemoveContainer" containerID="dea7caf435c9b643b9bf23332e305ff45576afe0b578975a4cb2d3b582c867a8" Jan 21 10:57:47 crc kubenswrapper[4745]: I0121 10:57:47.980384 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-45lw5"] Jan 21 10:57:47 crc kubenswrapper[4745]: I0121 10:57:47.986907 4745 scope.go:117] "RemoveContainer" containerID="5a73f82f1b9e366bf0a44de8640208cc967a9a4d2fca33b678991e2c0f1fa438" Jan 21 10:57:48 crc kubenswrapper[4745]: W0121 10:57:48.038168 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod444abf7d_45e7_490e_a1af_5a082b51a3af.slice/crio-4d14e376d2038b7e746f6f818a971fda9bec7c9cc41c911741a89c82e18083ba WatchSource:0}: Error finding container 4d14e376d2038b7e746f6f818a971fda9bec7c9cc41c911741a89c82e18083ba: Status 404 returned error can't find the container with id 4d14e376d2038b7e746f6f818a971fda9bec7c9cc41c911741a89c82e18083ba Jan 21 10:57:48 crc 
kubenswrapper[4745]: I0121 10:57:48.077388 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-764c5664d7-6chvk" podUID="1113c34b-a9b5-4849-b1d8-b46b4e622841" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.129:5353: i/o timeout" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.138392 4745 scope.go:117] "RemoveContainer" containerID="aa6e2bb609344a3ea8c7cca200f3e1920233bef634dc5d5ab4f406f0bfd6ba4d" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.160869 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78cb545d88-xv4bf"] Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.183821 4745 scope.go:117] "RemoveContainer" containerID="8f4376d0fb335ce955811966592aac2162de65de2eb5b47d2bd8d7baeeef058d" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.299175 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-45lw5" event={"ID":"444abf7d-45e7-490e-a1af-5a082b51a3af","Type":"ContainerStarted","Data":"4d14e376d2038b7e746f6f818a971fda9bec7c9cc41c911741a89c82e18083ba"} Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.303564 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb545d88-xv4bf" event={"ID":"8d2746d8-86a1-412c-8cac-b737fff90886","Type":"ContainerStarted","Data":"f5057c44b306f9577b4c2b7e2fdd495725e74b778ddc9965be15eb6af5f198b5"} Jan 21 10:57:48 crc kubenswrapper[4745]: E0121 10:57:48.331499 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-6x5s4" podUID="9ac43469-c72e-486a-80bf-f6de6bdfa199" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.407410 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-ld94n"] Jan 21 10:57:48 
crc kubenswrapper[4745]: I0121 10:57:48.494867 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cdbfc4d4d-pm6ln"] Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.581885 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68d7f877d9-dj8vd"] Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.584055 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.595906 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.596146 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.640654 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68d7f877d9-dj8vd"] Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.646157 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-config\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.646223 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-combined-ca-bundle\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.646341 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht785\" (UniqueName: 
\"kubernetes.io/projected/45d1693d-5ab9-46b2-a4dd-de325b074f0f-kube-api-access-ht785\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.646474 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-internal-tls-certs\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.646515 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-ovndb-tls-certs\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.646594 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-httpd-config\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.646752 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-public-tls-certs\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.750190 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-config\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.750742 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-combined-ca-bundle\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.750801 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht785\" (UniqueName: \"kubernetes.io/projected/45d1693d-5ab9-46b2-a4dd-de325b074f0f-kube-api-access-ht785\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.750852 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-internal-tls-certs\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.750882 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-ovndb-tls-certs\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.750909 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-httpd-config\") pod 
\"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.750965 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-public-tls-certs\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.764546 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-internal-tls-certs\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.765093 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-config\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.766097 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-httpd-config\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.766748 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-public-tls-certs\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc 
kubenswrapper[4745]: I0121 10:57:48.772078 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-combined-ca-bundle\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.773165 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht785\" (UniqueName: \"kubernetes.io/projected/45d1693d-5ab9-46b2-a4dd-de325b074f0f-kube-api-access-ht785\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.778864 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-ovndb-tls-certs\") pod \"neutron-68d7f877d9-dj8vd\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.800340 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-94bcb9f8b-t6knd"] Jan 21 10:57:48 crc kubenswrapper[4745]: I0121 10:57:48.996653 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.049137 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.385247 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94bcb9f8b-t6knd" event={"ID":"89c568b0-5492-496e-a324-93aeb78a82fd","Type":"ContainerStarted","Data":"2ea6b68ded7ab89c63c85148a2e1867d4128c69f150a1cccf1017660dd508855"} Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.385762 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94bcb9f8b-t6knd" event={"ID":"89c568b0-5492-496e-a324-93aeb78a82fd","Type":"ContainerStarted","Data":"fef845246fa0e61775de7fcb7b5ed7a1a6024925b7b21f5243472c2aeba30e89"} Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.390968 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be","Type":"ContainerStarted","Data":"dba86eae0cc21ac7b0bfd751427f3f52abd6e51c6ffe630bab2a7f8baab9e85c"} Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.392893 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-45lw5" event={"ID":"444abf7d-45e7-490e-a1af-5a082b51a3af","Type":"ContainerStarted","Data":"68c5cff5d9b6b515a71000aefd4a7bc7875a3525a7c8e2d6c70c406c3598993e"} Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.428396 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-45lw5" podStartSLOduration=40.428376625 podStartE2EDuration="40.428376625s" podCreationTimestamp="2026-01-21 10:57:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:57:49.421863201 +0000 UTC m=+1253.882650799" watchObservedRunningTime="2026-01-21 
10:57:49.428376625 +0000 UTC m=+1253.889164223" Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.493030 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68d7f877d9-dj8vd"] Jan 21 10:57:49 crc kubenswrapper[4745]: W0121 10:57:49.519681 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45d1693d_5ab9_46b2_a4dd_de325b074f0f.slice/crio-cc0ed54d479a28cf536d4e719bbe5f526d910506f9cc6688c1387fec410d3a17 WatchSource:0}: Error finding container cc0ed54d479a28cf536d4e719bbe5f526d910506f9cc6688c1387fec410d3a17: Status 404 returned error can't find the container with id cc0ed54d479a28cf536d4e719bbe5f526d910506f9cc6688c1387fec410d3a17 Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.522034 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cdbfc4d4d-pm6ln" event={"ID":"1b30531d-e957-4efd-b09c-d5d0b5fd1382","Type":"ContainerStarted","Data":"af5bbdb5a8c8bb730e80c2465d1c0b94be4b24d00ed6afd838b4583c8a461e1e"} Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.542244 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9221596-2fe8-46b3-b699-2360ddbe7dcf","Type":"ContainerStarted","Data":"1bad4275339a90422ce7155a52421c1fbe91364387c2059f4b6f3fa7b83a770a"} Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.553276 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb545d88-xv4bf" event={"ID":"8d2746d8-86a1-412c-8cac-b737fff90886","Type":"ContainerStarted","Data":"9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a"} Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.567268 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hj5fq" event={"ID":"be0086c8-abfc-4740-9d81-62eab45e6507","Type":"ContainerStarted","Data":"108ce0cebeefd813918deebb94fde732e4663dab8560edf6a5b7c39d0f458ec8"} Jan 21 10:57:49 crc 
kubenswrapper[4745]: I0121 10:57:49.578844 4745 generic.go:334] "Generic (PLEG): container finished" podID="c501f46e-be57-458d-bb01-a1db3aecbd93" containerID="46f852bed121ee73121cecad77ba2e0f1575fe98982906baca392f6d52f46b57" exitCode=0 Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.578900 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" event={"ID":"c501f46e-be57-458d-bb01-a1db3aecbd93","Type":"ContainerDied","Data":"46f852bed121ee73121cecad77ba2e0f1575fe98982906baca392f6d52f46b57"} Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.578937 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" event={"ID":"c501f46e-be57-458d-bb01-a1db3aecbd93","Type":"ContainerStarted","Data":"5285de65f9fb63140296bf7ed69edc5b1b973f3abcb7436730bf0f4f63ca7811"} Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.611418 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hj5fq" podStartSLOduration=5.23995796 podStartE2EDuration="55.611397848s" podCreationTimestamp="2026-01-21 10:56:54 +0000 UTC" firstStartedPulling="2026-01-21 10:56:57.182694406 +0000 UTC m=+1201.643482004" lastFinishedPulling="2026-01-21 10:57:47.554134294 +0000 UTC m=+1252.014921892" observedRunningTime="2026-01-21 10:57:49.60518953 +0000 UTC m=+1254.065977148" watchObservedRunningTime="2026-01-21 10:57:49.611397848 +0000 UTC m=+1254.072185446" Jan 21 10:57:49 crc kubenswrapper[4745]: I0121 10:57:49.647462 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 10:57:49 crc kubenswrapper[4745]: W0121 10:57:49.660739 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode982fb4c_3818_4f04_b7ed_c32666261f07.slice/crio-d71081d39486a443031698e1dcb8bd5b439b66a6237d1b1b24d484dc86f5dc2d WatchSource:0}: Error finding container 
d71081d39486a443031698e1dcb8bd5b439b66a6237d1b1b24d484dc86f5dc2d: Status 404 returned error can't find the container with id d71081d39486a443031698e1dcb8bd5b439b66a6237d1b1b24d484dc86f5dc2d Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.639226 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e982fb4c-3818-4f04-b7ed-c32666261f07","Type":"ContainerStarted","Data":"d71081d39486a443031698e1dcb8bd5b439b66a6237d1b1b24d484dc86f5dc2d"} Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.710975 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d7f877d9-dj8vd" event={"ID":"45d1693d-5ab9-46b2-a4dd-de325b074f0f","Type":"ContainerStarted","Data":"bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2"} Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.711039 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d7f877d9-dj8vd" event={"ID":"45d1693d-5ab9-46b2-a4dd-de325b074f0f","Type":"ContainerStarted","Data":"90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1"} Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.711055 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d7f877d9-dj8vd" event={"ID":"45d1693d-5ab9-46b2-a4dd-de325b074f0f","Type":"ContainerStarted","Data":"cc0ed54d479a28cf536d4e719bbe5f526d910506f9cc6688c1387fec410d3a17"} Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.711364 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.744284 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" event={"ID":"c501f46e-be57-458d-bb01-a1db3aecbd93","Type":"ContainerStarted","Data":"e2ed3256fd45122a9544514b9b54e03818b3c771d057ec7cad3089e392f53dc6"} Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.745897 4745 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.785426 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94bcb9f8b-t6knd" event={"ID":"89c568b0-5492-496e-a324-93aeb78a82fd","Type":"ContainerStarted","Data":"3c0e701504d3132b3c1cbbfec9509408b319cceee1e8dcf2f7a753801a688187"} Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.787284 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.805523 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be","Type":"ContainerStarted","Data":"8b5d6aff5cd21f1dab9c1e52236e926cbc75280823886bb39f699a251dbe75fe"} Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.821379 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68d7f877d9-dj8vd" podStartSLOduration=2.820941864 podStartE2EDuration="2.820941864s" podCreationTimestamp="2026-01-21 10:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:57:50.806152046 +0000 UTC m=+1255.266939644" watchObservedRunningTime="2026-01-21 10:57:50.820941864 +0000 UTC m=+1255.281729462" Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.835649 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb545d88-xv4bf" event={"ID":"8d2746d8-86a1-412c-8cac-b737fff90886","Type":"ContainerStarted","Data":"db044202ae0063faeb02cf75ac50f68010a4372bb2bd84a035565822361bf906"} Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.846314 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" podStartSLOduration=4.846297996 
podStartE2EDuration="4.846297996s" podCreationTimestamp="2026-01-21 10:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:57:50.841906008 +0000 UTC m=+1255.302693596" watchObservedRunningTime="2026-01-21 10:57:50.846297996 +0000 UTC m=+1255.307085594" Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.900822 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-94bcb9f8b-t6knd" podStartSLOduration=4.9008043820000005 podStartE2EDuration="4.900804382s" podCreationTimestamp="2026-01-21 10:57:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:57:50.898400457 +0000 UTC m=+1255.359188055" watchObservedRunningTime="2026-01-21 10:57:50.900804382 +0000 UTC m=+1255.361591980" Jan 21 10:57:50 crc kubenswrapper[4745]: I0121 10:57:50.938563 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78cb545d88-xv4bf" podStartSLOduration=41.251811009 podStartE2EDuration="41.938504315s" podCreationTimestamp="2026-01-21 10:57:09 +0000 UTC" firstStartedPulling="2026-01-21 10:57:48.186585452 +0000 UTC m=+1252.647373040" lastFinishedPulling="2026-01-21 10:57:48.873278748 +0000 UTC m=+1253.334066346" observedRunningTime="2026-01-21 10:57:50.93793974 +0000 UTC m=+1255.398727338" watchObservedRunningTime="2026-01-21 10:57:50.938504315 +0000 UTC m=+1255.399291913" Jan 21 10:57:51 crc kubenswrapper[4745]: I0121 10:57:51.848050 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e982fb4c-3818-4f04-b7ed-c32666261f07","Type":"ContainerStarted","Data":"eba15377726a51f5cbf390f21c83a10b8e4315b98f346b4a1825da1de8255e12"} Jan 21 10:57:51 crc kubenswrapper[4745]: I0121 10:57:51.861135 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be","Type":"ContainerStarted","Data":"f25e04d1c510833a32aa438356534008fa152ac0df553795c6bbfbdfaa3bf8ce"} Jan 21 10:57:51 crc kubenswrapper[4745]: I0121 10:57:51.868083 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cdbfc4d4d-pm6ln" event={"ID":"1b30531d-e957-4efd-b09c-d5d0b5fd1382","Type":"ContainerStarted","Data":"f57ccedb86dad5657f9fdf7c445e2849aacbd47de26c247bb9bde68caa1753ec"} Jan 21 10:57:51 crc kubenswrapper[4745]: I0121 10:57:51.868133 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cdbfc4d4d-pm6ln" event={"ID":"1b30531d-e957-4efd-b09c-d5d0b5fd1382","Type":"ContainerStarted","Data":"013e82ea86934d0790e632687e1ca414b195bafa13bf53f8c6f91235f6891513"} Jan 21 10:57:51 crc kubenswrapper[4745]: I0121 10:57:51.906105 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.906086786 podStartE2EDuration="6.906086786s" podCreationTimestamp="2026-01-21 10:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:57:51.885131962 +0000 UTC m=+1256.345919560" watchObservedRunningTime="2026-01-21 10:57:51.906086786 +0000 UTC m=+1256.366874384" Jan 21 10:57:51 crc kubenswrapper[4745]: I0121 10:57:51.913176 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podStartSLOduration=41.144244416 podStartE2EDuration="42.913154635s" podCreationTimestamp="2026-01-21 10:57:09 +0000 UTC" firstStartedPulling="2026-01-21 10:57:48.538908106 +0000 UTC m=+1252.999695704" lastFinishedPulling="2026-01-21 10:57:50.307818325 +0000 UTC m=+1254.768605923" observedRunningTime="2026-01-21 10:57:51.908385167 +0000 UTC m=+1256.369172765" watchObservedRunningTime="2026-01-21 10:57:51.913154635 +0000 UTC 
m=+1256.373942233" Jan 21 10:57:52 crc kubenswrapper[4745]: I0121 10:57:52.878232 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e982fb4c-3818-4f04-b7ed-c32666261f07","Type":"ContainerStarted","Data":"e3a9af177fbd76388849a99670cc03ceb63e58c65a61533b8636f9b26dac0aef"} Jan 21 10:57:52 crc kubenswrapper[4745]: I0121 10:57:52.910144 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.910123766 podStartE2EDuration="7.910123766s" podCreationTimestamp="2026-01-21 10:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:57:52.900686152 +0000 UTC m=+1257.361473770" watchObservedRunningTime="2026-01-21 10:57:52.910123766 +0000 UTC m=+1257.370911364" Jan 21 10:57:53 crc kubenswrapper[4745]: I0121 10:57:53.891344 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9221596-2fe8-46b3-b699-2360ddbe7dcf","Type":"ContainerStarted","Data":"2af0a5ec2b846cfbe35cc2f72a035f43328926dc58f9442f1e3a13ba67ae9e42"} Jan 21 10:57:54 crc kubenswrapper[4745]: I0121 10:57:54.901695 4745 generic.go:334] "Generic (PLEG): container finished" podID="be0086c8-abfc-4740-9d81-62eab45e6507" containerID="108ce0cebeefd813918deebb94fde732e4663dab8560edf6a5b7c39d0f458ec8" exitCode=0 Jan 21 10:57:54 crc kubenswrapper[4745]: I0121 10:57:54.901767 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hj5fq" event={"ID":"be0086c8-abfc-4740-9d81-62eab45e6507","Type":"ContainerDied","Data":"108ce0cebeefd813918deebb94fde732e4663dab8560edf6a5b7c39d0f458ec8"} Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 10:57:55.746064 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 
10:57:55.746353 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 10:57:55.770198 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 10:57:55.770269 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 10:57:55.805686 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 10:57:55.822134 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 10:57:55.851633 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 10:57:55.879385 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 10:57:55.923093 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 10:57:55.923260 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 10:57:55.923304 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 10:57:55 crc kubenswrapper[4745]: I0121 10:57:55.923595 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 10:57:56 crc kubenswrapper[4745]: I0121 
10:57:56.607804 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" Jan 21 10:57:56 crc kubenswrapper[4745]: I0121 10:57:56.677246 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd5zm"] Jan 21 10:57:56 crc kubenswrapper[4745]: I0121 10:57:56.684907 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" podUID="b28e59ca-5792-4c1a-a96c-e6aee4f83026" containerName="dnsmasq-dns" containerID="cri-o://6793da3a559ea662747b2d18ef5f34684874ba791e0597a0b93dae353561cd83" gracePeriod=10 Jan 21 10:57:57 crc kubenswrapper[4745]: I0121 10:57:57.954308 4745 generic.go:334] "Generic (PLEG): container finished" podID="444abf7d-45e7-490e-a1af-5a082b51a3af" containerID="68c5cff5d9b6b515a71000aefd4a7bc7875a3525a7c8e2d6c70c406c3598993e" exitCode=0 Jan 21 10:57:57 crc kubenswrapper[4745]: I0121 10:57:57.954408 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-45lw5" event={"ID":"444abf7d-45e7-490e-a1af-5a082b51a3af","Type":"ContainerDied","Data":"68c5cff5d9b6b515a71000aefd4a7bc7875a3525a7c8e2d6c70c406c3598993e"} Jan 21 10:57:57 crc kubenswrapper[4745]: I0121 10:57:57.957234 4745 generic.go:334] "Generic (PLEG): container finished" podID="b28e59ca-5792-4c1a-a96c-e6aee4f83026" containerID="6793da3a559ea662747b2d18ef5f34684874ba791e0597a0b93dae353561cd83" exitCode=0 Jan 21 10:57:57 crc kubenswrapper[4745]: I0121 10:57:57.957336 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:57:57 crc kubenswrapper[4745]: I0121 10:57:57.958370 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" event={"ID":"b28e59ca-5792-4c1a-a96c-e6aee4f83026","Type":"ContainerDied","Data":"6793da3a559ea662747b2d18ef5f34684874ba791e0597a0b93dae353561cd83"} Jan 21 10:57:58 crc kubenswrapper[4745]: I0121 10:57:58.514115 4745 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" podUID="b28e59ca-5792-4c1a-a96c-e6aee4f83026" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: connect: connection refused" Jan 21 10:57:59 crc kubenswrapper[4745]: I0121 10:57:59.710476 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:59 crc kubenswrapper[4745]: I0121 10:57:59.710755 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:57:59 crc kubenswrapper[4745]: I0121 10:57:59.713961 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 21 10:58:00 crc kubenswrapper[4745]: I0121 10:58:00.029695 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:58:00 crc kubenswrapper[4745]: I0121 10:58:00.030504 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:58:00 crc kubenswrapper[4745]: I0121 10:58:00.771282 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 10:58:00 crc kubenswrapper[4745]: I0121 10:58:00.771439 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:58:00 crc kubenswrapper[4745]: I0121 10:58:00.774595 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 10:58:00 crc kubenswrapper[4745]: I0121 10:58:00.774703 4745 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Jan 21 10:58:00 crc kubenswrapper[4745]: I0121 10:58:00.802518 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 10:58:01 crc kubenswrapper[4745]: I0121 10:58:01.080241 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 10:58:01 crc kubenswrapper[4745]: I0121 10:58:01.933866 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hj5fq" Jan 21 10:58:01 crc kubenswrapper[4745]: I0121 10:58:01.967353 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.002015 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.104048 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.115139 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-pd5zm" event={"ID":"b28e59ca-5792-4c1a-a96c-e6aee4f83026","Type":"ContainerDied","Data":"cc6fc8be6b4382aa6e25c8185962b59b810ad42f3af895873301f7047b068ee6"} Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.115278 4745 scope.go:117] "RemoveContainer" containerID="6793da3a559ea662747b2d18ef5f34684874ba791e0597a0b93dae353561cd83" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124033 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-scripts\") pod \"be0086c8-abfc-4740-9d81-62eab45e6507\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124097 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-scripts\") pod \"444abf7d-45e7-490e-a1af-5a082b51a3af\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124147 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-credential-keys\") pod \"444abf7d-45e7-490e-a1af-5a082b51a3af\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124200 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-config-data\") pod \"be0086c8-abfc-4740-9d81-62eab45e6507\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124227 
4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-config-data\") pod \"444abf7d-45e7-490e-a1af-5a082b51a3af\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124305 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-fernet-keys\") pod \"444abf7d-45e7-490e-a1af-5a082b51a3af\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124340 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztrfq\" (UniqueName: \"kubernetes.io/projected/444abf7d-45e7-490e-a1af-5a082b51a3af-kube-api-access-ztrfq\") pod \"444abf7d-45e7-490e-a1af-5a082b51a3af\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124397 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-sb\") pod \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124440 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-nb\") pod \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124440 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hj5fq" 
event={"ID":"be0086c8-abfc-4740-9d81-62eab45e6507","Type":"ContainerDied","Data":"877dcdd850aeedc2b3d6eb12da7a1760363420b3e605b8ff7b3d33d07106c6ed"} Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124473 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="877dcdd850aeedc2b3d6eb12da7a1760363420b3e605b8ff7b3d33d07106c6ed" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.124471 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-combined-ca-bundle\") pod \"be0086c8-abfc-4740-9d81-62eab45e6507\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.152162 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hj5fq" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.166915 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-svc\") pod \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.167074 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vxjj9\" (UniqueName: \"kubernetes.io/projected/be0086c8-abfc-4740-9d81-62eab45e6507-kube-api-access-vxjj9\") pod \"be0086c8-abfc-4740-9d81-62eab45e6507\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.167153 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-combined-ca-bundle\") pod \"444abf7d-45e7-490e-a1af-5a082b51a3af\" (UID: \"444abf7d-45e7-490e-a1af-5a082b51a3af\") " Jan 21 10:58:02 crc 
kubenswrapper[4745]: I0121 10:58:02.167216 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-swift-storage-0\") pod \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.167249 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0086c8-abfc-4740-9d81-62eab45e6507-logs\") pod \"be0086c8-abfc-4740-9d81-62eab45e6507\" (UID: \"be0086c8-abfc-4740-9d81-62eab45e6507\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.167295 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-config\") pod \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.167348 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh724\" (UniqueName: \"kubernetes.io/projected/b28e59ca-5792-4c1a-a96c-e6aee4f83026-kube-api-access-vh724\") pod \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\" (UID: \"b28e59ca-5792-4c1a-a96c-e6aee4f83026\") " Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.192046 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-45lw5" event={"ID":"444abf7d-45e7-490e-a1af-5a082b51a3af","Type":"ContainerDied","Data":"4d14e376d2038b7e746f6f818a971fda9bec7c9cc41c911741a89c82e18083ba"} Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.192095 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d14e376d2038b7e746f6f818a971fda9bec7c9cc41c911741a89c82e18083ba" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.192209 4745 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-45lw5" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.192876 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be0086c8-abfc-4740-9d81-62eab45e6507-logs" (OuterVolumeSpecName: "logs") pod "be0086c8-abfc-4740-9d81-62eab45e6507" (UID: "be0086c8-abfc-4740-9d81-62eab45e6507"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.231341 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0086c8-abfc-4740-9d81-62eab45e6507-kube-api-access-vxjj9" (OuterVolumeSpecName: "kube-api-access-vxjj9") pod "be0086c8-abfc-4740-9d81-62eab45e6507" (UID: "be0086c8-abfc-4740-9d81-62eab45e6507"). InnerVolumeSpecName "kube-api-access-vxjj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.236686 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-scripts" (OuterVolumeSpecName: "scripts") pod "444abf7d-45e7-490e-a1af-5a082b51a3af" (UID: "444abf7d-45e7-490e-a1af-5a082b51a3af"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.236753 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b28e59ca-5792-4c1a-a96c-e6aee4f83026-kube-api-access-vh724" (OuterVolumeSpecName: "kube-api-access-vh724") pod "b28e59ca-5792-4c1a-a96c-e6aee4f83026" (UID: "b28e59ca-5792-4c1a-a96c-e6aee4f83026"). InnerVolumeSpecName "kube-api-access-vh724". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.273940 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-scripts" (OuterVolumeSpecName: "scripts") pod "be0086c8-abfc-4740-9d81-62eab45e6507" (UID: "be0086c8-abfc-4740-9d81-62eab45e6507"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.288938 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444abf7d-45e7-490e-a1af-5a082b51a3af-kube-api-access-ztrfq" (OuterVolumeSpecName: "kube-api-access-ztrfq") pod "444abf7d-45e7-490e-a1af-5a082b51a3af" (UID: "444abf7d-45e7-490e-a1af-5a082b51a3af"). InnerVolumeSpecName "kube-api-access-ztrfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.290194 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.290217 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.290228 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ztrfq\" (UniqueName: \"kubernetes.io/projected/444abf7d-45e7-490e-a1af-5a082b51a3af-kube-api-access-ztrfq\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.290238 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vxjj9\" (UniqueName: \"kubernetes.io/projected/be0086c8-abfc-4740-9d81-62eab45e6507-kube-api-access-vxjj9\") on node \"crc\" DevicePath \"\"" 
Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.290246 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be0086c8-abfc-4740-9d81-62eab45e6507-logs\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.290255 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh724\" (UniqueName: \"kubernetes.io/projected/b28e59ca-5792-4c1a-a96c-e6aee4f83026-kube-api-access-vh724\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.290677 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "444abf7d-45e7-490e-a1af-5a082b51a3af" (UID: "444abf7d-45e7-490e-a1af-5a082b51a3af"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.290693 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "444abf7d-45e7-490e-a1af-5a082b51a3af" (UID: "444abf7d-45e7-490e-a1af-5a082b51a3af"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.296319 4745 scope.go:117] "RemoveContainer" containerID="186bf3942911ebf7cd088b81b1e7b198a1d05796d499a5e5ae81cf61a35050bf" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.311322 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be0086c8-abfc-4740-9d81-62eab45e6507" (UID: "be0086c8-abfc-4740-9d81-62eab45e6507"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.392427 4745 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.392459 4745 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.392471 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.447034 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-config-data" (OuterVolumeSpecName: "config-data") pod "be0086c8-abfc-4740-9d81-62eab45e6507" (UID: "be0086c8-abfc-4740-9d81-62eab45e6507"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.471737 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "444abf7d-45e7-490e-a1af-5a082b51a3af" (UID: "444abf7d-45e7-490e-a1af-5a082b51a3af"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.494824 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be0086c8-abfc-4740-9d81-62eab45e6507-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.494865 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.496034 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-config-data" (OuterVolumeSpecName: "config-data") pod "444abf7d-45e7-490e-a1af-5a082b51a3af" (UID: "444abf7d-45e7-490e-a1af-5a082b51a3af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.534544 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b28e59ca-5792-4c1a-a96c-e6aee4f83026" (UID: "b28e59ca-5792-4c1a-a96c-e6aee4f83026"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.551447 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b28e59ca-5792-4c1a-a96c-e6aee4f83026" (UID: "b28e59ca-5792-4c1a-a96c-e6aee4f83026"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.577351 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b28e59ca-5792-4c1a-a96c-e6aee4f83026" (UID: "b28e59ca-5792-4c1a-a96c-e6aee4f83026"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.600614 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/444abf7d-45e7-490e-a1af-5a082b51a3af-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.600651 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.600662 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.600671 4745 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.628192 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-config" (OuterVolumeSpecName: "config") pod "b28e59ca-5792-4c1a-a96c-e6aee4f83026" (UID: "b28e59ca-5792-4c1a-a96c-e6aee4f83026"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.661048 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b28e59ca-5792-4c1a-a96c-e6aee4f83026" (UID: "b28e59ca-5792-4c1a-a96c-e6aee4f83026"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.704335 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.704389 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b28e59ca-5792-4c1a-a96c-e6aee4f83026-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.794506 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd5zm"] Jan 21 10:58:02 crc kubenswrapper[4745]: I0121 10:58:02.818247 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-pd5zm"] Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.085036 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-9b7b6cc58-8rqwl"] Jan 21 10:58:03 crc kubenswrapper[4745]: E0121 10:58:03.085396 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28e59ca-5792-4c1a-a96c-e6aee4f83026" containerName="init" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.085408 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28e59ca-5792-4c1a-a96c-e6aee4f83026" containerName="init" Jan 21 10:58:03 crc kubenswrapper[4745]: E0121 10:58:03.085430 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b28e59ca-5792-4c1a-a96c-e6aee4f83026" containerName="dnsmasq-dns" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.085436 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28e59ca-5792-4c1a-a96c-e6aee4f83026" containerName="dnsmasq-dns" Jan 21 10:58:03 crc kubenswrapper[4745]: E0121 10:58:03.085450 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0086c8-abfc-4740-9d81-62eab45e6507" containerName="placement-db-sync" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.085458 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0086c8-abfc-4740-9d81-62eab45e6507" containerName="placement-db-sync" Jan 21 10:58:03 crc kubenswrapper[4745]: E0121 10:58:03.085470 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444abf7d-45e7-490e-a1af-5a082b51a3af" containerName="keystone-bootstrap" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.085476 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="444abf7d-45e7-490e-a1af-5a082b51a3af" containerName="keystone-bootstrap" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.085658 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0086c8-abfc-4740-9d81-62eab45e6507" containerName="placement-db-sync" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.085705 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28e59ca-5792-4c1a-a96c-e6aee4f83026" containerName="dnsmasq-dns" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.085726 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="444abf7d-45e7-490e-a1af-5a082b51a3af" containerName="keystone-bootstrap" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.086628 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.094821 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.095186 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.095309 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4wdzz" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.095418 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.095755 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.143231 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9b7b6cc58-8rqwl"] Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.247752 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-scripts\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.247829 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-combined-ca-bundle\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.247854 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rd6d\" (UniqueName: \"kubernetes.io/projected/f7dda9f1-400d-40c9-82a4-87b745d91803-kube-api-access-2rd6d\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.247881 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-internal-tls-certs\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.248021 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7dda9f1-400d-40c9-82a4-87b745d91803-logs\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.248082 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-public-tls-certs\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.248136 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-config-data\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.248638 4745 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/keystone-59f65b95fd-mfxld"] Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.249951 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.253425 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.253780 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.253947 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-2rgkp" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.254310 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.255035 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.255253 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.280250 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9221596-2fe8-46b3-b699-2360ddbe7dcf","Type":"ContainerStarted","Data":"20fc8f4f4578207bd73276fdefcdab0cecb5f1639fb5b7ecf67953ca2807f9de"} Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.288323 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-59f65b95fd-mfxld"] Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.300517 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rrpzk" 
event={"ID":"939e01d6-c378-485e-bd8c-8d394151ef3b","Type":"ContainerStarted","Data":"db6a851847d39f560fd4a3b35de6cbab2e8a942e537c0044e09db3f7cef847ad"} Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.307325 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tsql6" event={"ID":"267909cf-90b8-451d-9882-715e44dc2c30","Type":"ContainerStarted","Data":"713a4c5f522bb4cc43bac1cd27f219771ed3c9e6af9220bf56d67d54c691a618"} Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.324896 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-rrpzk" podStartSLOduration=5.083975645 podStartE2EDuration="1m10.324881997s" podCreationTimestamp="2026-01-21 10:56:53 +0000 UTC" firstStartedPulling="2026-01-21 10:56:56.529352508 +0000 UTC m=+1200.990140106" lastFinishedPulling="2026-01-21 10:58:01.77025886 +0000 UTC m=+1266.231046458" observedRunningTime="2026-01-21 10:58:03.318644399 +0000 UTC m=+1267.779431997" watchObservedRunningTime="2026-01-21 10:58:03.324881997 +0000 UTC m=+1267.785669595" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.344207 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-tsql6" podStartSLOduration=4.273850658 podStartE2EDuration="1m9.344189186s" podCreationTimestamp="2026-01-21 10:56:54 +0000 UTC" firstStartedPulling="2026-01-21 10:56:56.699989444 +0000 UTC m=+1201.160777042" lastFinishedPulling="2026-01-21 10:58:01.770327972 +0000 UTC m=+1266.231115570" observedRunningTime="2026-01-21 10:58:03.3383794 +0000 UTC m=+1267.799166998" watchObservedRunningTime="2026-01-21 10:58:03.344189186 +0000 UTC m=+1267.804976784" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.349645 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxdq6\" (UniqueName: \"kubernetes.io/projected/c1486472-15c0-432f-bca8-cf77403394f9-kube-api-access-rxdq6\") pod 
\"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.349732 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-scripts\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.349757 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-config-data\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.349792 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-credential-keys\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.349808 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-combined-ca-bundle\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.349871 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-combined-ca-bundle\") pod \"placement-9b7b6cc58-8rqwl\" (UID: 
\"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.349888 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rd6d\" (UniqueName: \"kubernetes.io/projected/f7dda9f1-400d-40c9-82a4-87b745d91803-kube-api-access-2rd6d\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.349919 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-public-tls-certs\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.349954 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-internal-tls-certs\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.349975 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-internal-tls-certs\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.350038 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7dda9f1-400d-40c9-82a4-87b745d91803-logs\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") 
" pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.350056 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-public-tls-certs\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.350133 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-fernet-keys\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.350159 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-config-data\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.350177 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-scripts\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.351087 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7dda9f1-400d-40c9-82a4-87b745d91803-logs\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.356462 
4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-public-tls-certs\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.357312 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-config-data\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.359100 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-scripts\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.359206 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-combined-ca-bundle\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.359729 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7dda9f1-400d-40c9-82a4-87b745d91803-internal-tls-certs\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.369248 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rd6d\" (UniqueName: 
\"kubernetes.io/projected/f7dda9f1-400d-40c9-82a4-87b745d91803-kube-api-access-2rd6d\") pod \"placement-9b7b6cc58-8rqwl\" (UID: \"f7dda9f1-400d-40c9-82a4-87b745d91803\") " pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.452224 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-combined-ca-bundle\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.452337 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-public-tls-certs\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.452360 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-internal-tls-certs\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.452432 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-fernet-keys\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.452452 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-scripts\") pod 
\"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.452497 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxdq6\" (UniqueName: \"kubernetes.io/projected/c1486472-15c0-432f-bca8-cf77403394f9-kube-api-access-rxdq6\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.452563 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-config-data\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.452580 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-credential-keys\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.458768 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-credential-keys\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.458945 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-combined-ca-bundle\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " 
pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.460136 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-fernet-keys\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.460313 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-scripts\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.461883 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.467487 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-config-data\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.468306 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-internal-tls-certs\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.470966 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1486472-15c0-432f-bca8-cf77403394f9-public-tls-certs\") pod \"keystone-59f65b95fd-mfxld\" (UID: 
\"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.477101 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxdq6\" (UniqueName: \"kubernetes.io/projected/c1486472-15c0-432f-bca8-cf77403394f9-kube-api-access-rxdq6\") pod \"keystone-59f65b95fd-mfxld\" (UID: \"c1486472-15c0-432f-bca8-cf77403394f9\") " pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:03 crc kubenswrapper[4745]: I0121 10:58:03.574202 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:04 crc kubenswrapper[4745]: I0121 10:58:04.009790 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b28e59ca-5792-4c1a-a96c-e6aee4f83026" path="/var/lib/kubelet/pods/b28e59ca-5792-4c1a-a96c-e6aee4f83026/volumes" Jan 21 10:58:04 crc kubenswrapper[4745]: I0121 10:58:04.064050 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-9b7b6cc58-8rqwl"] Jan 21 10:58:04 crc kubenswrapper[4745]: I0121 10:58:04.216185 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-59f65b95fd-mfxld"] Jan 21 10:58:04 crc kubenswrapper[4745]: I0121 10:58:04.337899 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6x5s4" event={"ID":"9ac43469-c72e-486a-80bf-f6de6bdfa199","Type":"ContainerStarted","Data":"4b89ace2acb3c934a500372662802c3e8ce2acc932ae30ce38cc1d3595500f20"} Jan 21 10:58:04 crc kubenswrapper[4745]: I0121 10:58:04.339963 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b7b6cc58-8rqwl" event={"ID":"f7dda9f1-400d-40c9-82a4-87b745d91803","Type":"ContainerStarted","Data":"0a94811fd64f196c4cb9a8adad4ce374459692d4595f8fcd3eb882bb8a138b87"} Jan 21 10:58:04 crc kubenswrapper[4745]: I0121 10:58:04.344693 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-59f65b95fd-mfxld" event={"ID":"c1486472-15c0-432f-bca8-cf77403394f9","Type":"ContainerStarted","Data":"dd47b0070054d6744d3381b683959ea685f516bfb59c738bd81c54cc7ddd9526"} Jan 21 10:58:04 crc kubenswrapper[4745]: I0121 10:58:04.414175 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-6x5s4" podStartSLOduration=4.352862933 podStartE2EDuration="1m10.396413382s" podCreationTimestamp="2026-01-21 10:56:54 +0000 UTC" firstStartedPulling="2026-01-21 10:56:56.531287248 +0000 UTC m=+1200.992074846" lastFinishedPulling="2026-01-21 10:58:02.574837697 +0000 UTC m=+1267.035625295" observedRunningTime="2026-01-21 10:58:04.390285628 +0000 UTC m=+1268.851073226" watchObservedRunningTime="2026-01-21 10:58:04.396413382 +0000 UTC m=+1268.857200980" Jan 21 10:58:05 crc kubenswrapper[4745]: I0121 10:58:05.358699 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-59f65b95fd-mfxld" event={"ID":"c1486472-15c0-432f-bca8-cf77403394f9","Type":"ContainerStarted","Data":"9a3d79ba54d2875de13d34ce1b013522efb5d8367e304920df51c9840c4c2687"} Jan 21 10:58:05 crc kubenswrapper[4745]: I0121 10:58:05.359240 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:05 crc kubenswrapper[4745]: I0121 10:58:05.362086 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b7b6cc58-8rqwl" event={"ID":"f7dda9f1-400d-40c9-82a4-87b745d91803","Type":"ContainerStarted","Data":"90de2f887a6c6840cef44cca82cae92f82debcdd833669cd0d19a052f12a1956"} Jan 21 10:58:05 crc kubenswrapper[4745]: I0121 10:58:05.362131 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-9b7b6cc58-8rqwl" event={"ID":"f7dda9f1-400d-40c9-82a4-87b745d91803","Type":"ContainerStarted","Data":"259c623a1eb81c3b19cc999bf777976b3525bae3326dc6f2f0d6268d8444323f"} Jan 21 10:58:05 crc kubenswrapper[4745]: I0121 10:58:05.362725 4745 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:05 crc kubenswrapper[4745]: I0121 10:58:05.362766 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:05 crc kubenswrapper[4745]: I0121 10:58:05.379750 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-59f65b95fd-mfxld" podStartSLOduration=2.379731745 podStartE2EDuration="2.379731745s" podCreationTimestamp="2026-01-21 10:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:05.377693071 +0000 UTC m=+1269.838480669" watchObservedRunningTime="2026-01-21 10:58:05.379731745 +0000 UTC m=+1269.840519333" Jan 21 10:58:05 crc kubenswrapper[4745]: I0121 10:58:05.415606 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-9b7b6cc58-8rqwl" podStartSLOduration=2.415367353 podStartE2EDuration="2.415367353s" podCreationTimestamp="2026-01-21 10:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:05.403834784 +0000 UTC m=+1269.864622372" watchObservedRunningTime="2026-01-21 10:58:05.415367353 +0000 UTC m=+1269.876154951" Jan 21 10:58:07 crc kubenswrapper[4745]: E0121 10:58:07.786402 4745 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod267909cf_90b8_451d_9882_715e44dc2c30.slice/crio-713a4c5f522bb4cc43bac1cd27f219771ed3c9e6af9220bf56d67d54c691a618.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod267909cf_90b8_451d_9882_715e44dc2c30.slice/crio-conmon-713a4c5f522bb4cc43bac1cd27f219771ed3c9e6af9220bf56d67d54c691a618.scope\": RecentStats: unable to find data in memory cache]" Jan 21 10:58:08 crc kubenswrapper[4745]: I0121 10:58:08.396601 4745 generic.go:334] "Generic (PLEG): container finished" podID="267909cf-90b8-451d-9882-715e44dc2c30" containerID="713a4c5f522bb4cc43bac1cd27f219771ed3c9e6af9220bf56d67d54c691a618" exitCode=0 Jan 21 10:58:08 crc kubenswrapper[4745]: I0121 10:58:08.396716 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tsql6" event={"ID":"267909cf-90b8-451d-9882-715e44dc2c30","Type":"ContainerDied","Data":"713a4c5f522bb4cc43bac1cd27f219771ed3c9e6af9220bf56d67d54c691a618"} Jan 21 10:58:09 crc kubenswrapper[4745]: I0121 10:58:09.407391 4745 generic.go:334] "Generic (PLEG): container finished" podID="939e01d6-c378-485e-bd8c-8d394151ef3b" containerID="db6a851847d39f560fd4a3b35de6cbab2e8a942e537c0044e09db3f7cef847ad" exitCode=0 Jan 21 10:58:09 crc kubenswrapper[4745]: I0121 10:58:09.407455 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rrpzk" event={"ID":"939e01d6-c378-485e-bd8c-8d394151ef3b","Type":"ContainerDied","Data":"db6a851847d39f560fd4a3b35de6cbab2e8a942e537c0044e09db3f7cef847ad"} Jan 21 10:58:09 crc kubenswrapper[4745]: I0121 10:58:09.711134 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 21 10:58:10 crc kubenswrapper[4745]: I0121 10:58:10.032155 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" 
probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 21 10:58:11 crc kubenswrapper[4745]: I0121 10:58:11.448134 4745 generic.go:334] "Generic (PLEG): container finished" podID="9ac43469-c72e-486a-80bf-f6de6bdfa199" containerID="4b89ace2acb3c934a500372662802c3e8ce2acc932ae30ce38cc1d3595500f20" exitCode=0 Jan 21 10:58:11 crc kubenswrapper[4745]: I0121 10:58:11.448262 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6x5s4" event={"ID":"9ac43469-c72e-486a-80bf-f6de6bdfa199","Type":"ContainerDied","Data":"4b89ace2acb3c934a500372662802c3e8ce2acc932ae30ce38cc1d3595500f20"} Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.064414 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-tsql6" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.080067 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-rrpzk" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.171820 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-combined-ca-bundle\") pod \"267909cf-90b8-451d-9882-715e44dc2c30\" (UID: \"267909cf-90b8-451d-9882-715e44dc2c30\") " Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.171953 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-db-sync-config-data\") pod \"267909cf-90b8-451d-9882-715e44dc2c30\" (UID: \"267909cf-90b8-451d-9882-715e44dc2c30\") " Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.172018 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvxcq\" (UniqueName: \"kubernetes.io/projected/267909cf-90b8-451d-9882-715e44dc2c30-kube-api-access-rvxcq\") pod \"267909cf-90b8-451d-9882-715e44dc2c30\" (UID: \"267909cf-90b8-451d-9882-715e44dc2c30\") " Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.180702 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/267909cf-90b8-451d-9882-715e44dc2c30-kube-api-access-rvxcq" (OuterVolumeSpecName: "kube-api-access-rvxcq") pod "267909cf-90b8-451d-9882-715e44dc2c30" (UID: "267909cf-90b8-451d-9882-715e44dc2c30"). InnerVolumeSpecName "kube-api-access-rvxcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.180833 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "267909cf-90b8-451d-9882-715e44dc2c30" (UID: "267909cf-90b8-451d-9882-715e44dc2c30"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.224644 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "267909cf-90b8-451d-9882-715e44dc2c30" (UID: "267909cf-90b8-451d-9882-715e44dc2c30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.274729 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-combined-ca-bundle\") pod \"939e01d6-c378-485e-bd8c-8d394151ef3b\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.274891 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-config-data\") pod \"939e01d6-c378-485e-bd8c-8d394151ef3b\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.274928 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ddd2\" (UniqueName: \"kubernetes.io/projected/939e01d6-c378-485e-bd8c-8d394151ef3b-kube-api-access-6ddd2\") pod \"939e01d6-c378-485e-bd8c-8d394151ef3b\" (UID: \"939e01d6-c378-485e-bd8c-8d394151ef3b\") " Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.275944 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.275985 4745 reconciler_common.go:293] "Volume detached 
for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/267909cf-90b8-451d-9882-715e44dc2c30-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.276000 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvxcq\" (UniqueName: \"kubernetes.io/projected/267909cf-90b8-451d-9882-715e44dc2c30-kube-api-access-rvxcq\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.279177 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/939e01d6-c378-485e-bd8c-8d394151ef3b-kube-api-access-6ddd2" (OuterVolumeSpecName: "kube-api-access-6ddd2") pod "939e01d6-c378-485e-bd8c-8d394151ef3b" (UID: "939e01d6-c378-485e-bd8c-8d394151ef3b"). InnerVolumeSpecName "kube-api-access-6ddd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.301749 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "939e01d6-c378-485e-bd8c-8d394151ef3b" (UID: "939e01d6-c378-485e-bd8c-8d394151ef3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.367574 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-config-data" (OuterVolumeSpecName: "config-data") pod "939e01d6-c378-485e-bd8c-8d394151ef3b" (UID: "939e01d6-c378-485e-bd8c-8d394151ef3b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.377389 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ddd2\" (UniqueName: \"kubernetes.io/projected/939e01d6-c378-485e-bd8c-8d394151ef3b-kube-api-access-6ddd2\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.377422 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.377432 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/939e01d6-c378-485e-bd8c-8d394151ef3b-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.460302 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-rrpzk" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.461236 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-rrpzk" event={"ID":"939e01d6-c378-485e-bd8c-8d394151ef3b","Type":"ContainerDied","Data":"ec2895213c20acf7ad2fd71deb9dfb145ccd5e99f77af42953bda7c2d615fb15"} Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.461271 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec2895213c20acf7ad2fd71deb9dfb145ccd5e99f77af42953bda7c2d615fb15" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.465827 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-tsql6" event={"ID":"267909cf-90b8-451d-9882-715e44dc2c30","Type":"ContainerDied","Data":"774e7041fd1a01f8d74ed89db197a74f049a476e3f81419d1205b6478c2b9dbf"} Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.465859 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-tsql6" Jan 21 10:58:12 crc kubenswrapper[4745]: I0121 10:58:12.465873 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="774e7041fd1a01f8d74ed89db197a74f049a476e3f81419d1205b6478c2b9dbf" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.463704 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-c9f6db4f9-qq29j"] Jan 21 10:58:13 crc kubenswrapper[4745]: E0121 10:58:13.464349 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="267909cf-90b8-451d-9882-715e44dc2c30" containerName="barbican-db-sync" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.464363 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="267909cf-90b8-451d-9882-715e44dc2c30" containerName="barbican-db-sync" Jan 21 10:58:13 crc kubenswrapper[4745]: E0121 10:58:13.464385 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="939e01d6-c378-485e-bd8c-8d394151ef3b" containerName="heat-db-sync" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.464393 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="939e01d6-c378-485e-bd8c-8d394151ef3b" containerName="heat-db-sync" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.464569 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="939e01d6-c378-485e-bd8c-8d394151ef3b" containerName="heat-db-sync" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.464594 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="267909cf-90b8-451d-9882-715e44dc2c30" containerName="barbican-db-sync" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.465484 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.481470 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.482007 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rhfch" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.502819 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-795566bfc4-6vxf4"] Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.504351 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.506875 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.515969 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.525764 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c9f6db4f9-qq29j"] Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.571308 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-795566bfc4-6vxf4"] Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.605611 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wvtkb"] Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.607395 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/446eb8df-6f58-43b3-9c04-3741ac0f25a3-logs\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: 
\"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.607432 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/446eb8df-6f58-43b3-9c04-3741ac0f25a3-config-data-custom\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.607453 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446eb8df-6f58-43b3-9c04-3741ac0f25a3-combined-ca-bundle\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.607493 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393b4909-d9ac-4852-9ccb-495be4b1b265-config-data\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.607516 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/393b4909-d9ac-4852-9ccb-495be4b1b265-config-data-custom\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.607553 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/446eb8df-6f58-43b3-9c04-3741ac0f25a3-config-data\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.607577 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mnbw\" (UniqueName: \"kubernetes.io/projected/393b4909-d9ac-4852-9ccb-495be4b1b265-kube-api-access-8mnbw\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.607596 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393b4909-d9ac-4852-9ccb-495be4b1b265-combined-ca-bundle\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.607621 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/393b4909-d9ac-4852-9ccb-495be4b1b265-logs\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.607649 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5c5r\" (UniqueName: \"kubernetes.io/projected/446eb8df-6f58-43b3-9c04-3741ac0f25a3-kube-api-access-q5c5r\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 
10:58:13.608483 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.642324 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wvtkb"] Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.708950 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5c5r\" (UniqueName: \"kubernetes.io/projected/446eb8df-6f58-43b3-9c04-3741ac0f25a3-kube-api-access-q5c5r\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709053 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709107 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709156 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/446eb8df-6f58-43b3-9c04-3741ac0f25a3-logs\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 
10:58:13.709182 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/446eb8df-6f58-43b3-9c04-3741ac0f25a3-config-data-custom\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709747 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446eb8df-6f58-43b3-9c04-3741ac0f25a3-combined-ca-bundle\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709796 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvr7w\" (UniqueName: \"kubernetes.io/projected/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-kube-api-access-xvr7w\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709816 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-svc\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709840 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393b4909-d9ac-4852-9ccb-495be4b1b265-config-data\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " 
pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709859 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/393b4909-d9ac-4852-9ccb-495be4b1b265-config-data-custom\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709907 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446eb8df-6f58-43b3-9c04-3741ac0f25a3-config-data\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709927 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mnbw\" (UniqueName: \"kubernetes.io/projected/393b4909-d9ac-4852-9ccb-495be4b1b265-kube-api-access-8mnbw\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709944 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393b4909-d9ac-4852-9ccb-495be4b1b265-combined-ca-bundle\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709964 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-config\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: 
\"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.709992 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/393b4909-d9ac-4852-9ccb-495be4b1b265-logs\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.710011 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.710081 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/446eb8df-6f58-43b3-9c04-3741ac0f25a3-logs\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.726558 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/393b4909-d9ac-4852-9ccb-495be4b1b265-logs\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.733183 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/393b4909-d9ac-4852-9ccb-495be4b1b265-config-data-custom\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " 
pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.748954 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/393b4909-d9ac-4852-9ccb-495be4b1b265-config-data\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.753342 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/446eb8df-6f58-43b3-9c04-3741ac0f25a3-combined-ca-bundle\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.753422 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/393b4909-d9ac-4852-9ccb-495be4b1b265-combined-ca-bundle\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.754323 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/446eb8df-6f58-43b3-9c04-3741ac0f25a3-config-data\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.761260 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/446eb8df-6f58-43b3-9c04-3741ac0f25a3-config-data-custom\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " 
pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.763810 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5c5r\" (UniqueName: \"kubernetes.io/projected/446eb8df-6f58-43b3-9c04-3741ac0f25a3-kube-api-access-q5c5r\") pod \"barbican-keystone-listener-795566bfc4-6vxf4\" (UID: \"446eb8df-6f58-43b3-9c04-3741ac0f25a3\") " pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.766047 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mnbw\" (UniqueName: \"kubernetes.io/projected/393b4909-d9ac-4852-9ccb-495be4b1b265-kube-api-access-8mnbw\") pod \"barbican-worker-c9f6db4f9-qq29j\" (UID: \"393b4909-d9ac-4852-9ccb-495be4b1b265\") " pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.807255 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-c9f6db4f9-qq29j" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.811336 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvr7w\" (UniqueName: \"kubernetes.io/projected/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-kube-api-access-xvr7w\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.811399 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-svc\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.811456 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-config\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.811512 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.811660 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.811710 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.812564 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-svc\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.812967 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-config\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.812998 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.813120 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.813706 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.840311 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvr7w\" (UniqueName: \"kubernetes.io/projected/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-kube-api-access-xvr7w\") pod \"dnsmasq-dns-85ff748b95-wvtkb\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.846560 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.937496 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:13 crc kubenswrapper[4745]: I0121 10:58:13.963752 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.121249 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnskw\" (UniqueName: \"kubernetes.io/projected/9ac43469-c72e-486a-80bf-f6de6bdfa199-kube-api-access-dnskw\") pod \"9ac43469-c72e-486a-80bf-f6de6bdfa199\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.121731 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-db-sync-config-data\") pod \"9ac43469-c72e-486a-80bf-f6de6bdfa199\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.121785 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ac43469-c72e-486a-80bf-f6de6bdfa199-etc-machine-id\") pod \"9ac43469-c72e-486a-80bf-f6de6bdfa199\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.121848 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-scripts\") pod \"9ac43469-c72e-486a-80bf-f6de6bdfa199\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.121902 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-combined-ca-bundle\") pod \"9ac43469-c72e-486a-80bf-f6de6bdfa199\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.121922 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-config-data\") pod \"9ac43469-c72e-486a-80bf-f6de6bdfa199\" (UID: \"9ac43469-c72e-486a-80bf-f6de6bdfa199\") " Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.128675 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ac43469-c72e-486a-80bf-f6de6bdfa199-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9ac43469-c72e-486a-80bf-f6de6bdfa199" (UID: "9ac43469-c72e-486a-80bf-f6de6bdfa199"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.144990 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9ac43469-c72e-486a-80bf-f6de6bdfa199" (UID: "9ac43469-c72e-486a-80bf-f6de6bdfa199"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.160433 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-696b8bdd7d-7slj5"] Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.160846 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-scripts" (OuterVolumeSpecName: "scripts") pod "9ac43469-c72e-486a-80bf-f6de6bdfa199" (UID: "9ac43469-c72e-486a-80bf-f6de6bdfa199"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.165791 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ac43469-c72e-486a-80bf-f6de6bdfa199-kube-api-access-dnskw" (OuterVolumeSpecName: "kube-api-access-dnskw") pod "9ac43469-c72e-486a-80bf-f6de6bdfa199" (UID: "9ac43469-c72e-486a-80bf-f6de6bdfa199"). InnerVolumeSpecName "kube-api-access-dnskw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:14 crc kubenswrapper[4745]: E0121 10:58:14.167190 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ac43469-c72e-486a-80bf-f6de6bdfa199" containerName="cinder-db-sync" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.167213 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ac43469-c72e-486a-80bf-f6de6bdfa199" containerName="cinder-db-sync" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.167414 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ac43469-c72e-486a-80bf-f6de6bdfa199" containerName="cinder-db-sync" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.168301 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.176231 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.198797 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-696b8bdd7d-7slj5"] Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.237945 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ac43469-c72e-486a-80bf-f6de6bdfa199" (UID: "9ac43469-c72e-486a-80bf-f6de6bdfa199"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.247839 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.247887 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.247901 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnskw\" (UniqueName: \"kubernetes.io/projected/9ac43469-c72e-486a-80bf-f6de6bdfa199-kube-api-access-dnskw\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.247912 4745 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.247921 4745 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ac43469-c72e-486a-80bf-f6de6bdfa199-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.252209 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-config-data" (OuterVolumeSpecName: "config-data") pod "9ac43469-c72e-486a-80bf-f6de6bdfa199" (UID: "9ac43469-c72e-486a-80bf-f6de6bdfa199"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.350331 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.350425 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a7c6d2e-f298-4367-a7e8-3028f67b102c-logs\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.350447 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data-custom\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.350578 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7lxm\" (UniqueName: \"kubernetes.io/projected/1a7c6d2e-f298-4367-a7e8-3028f67b102c-kube-api-access-l7lxm\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.350597 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-combined-ca-bundle\") pod 
\"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.350668 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ac43469-c72e-486a-80bf-f6de6bdfa199-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.452579 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7lxm\" (UniqueName: \"kubernetes.io/projected/1a7c6d2e-f298-4367-a7e8-3028f67b102c-kube-api-access-l7lxm\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.452650 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-combined-ca-bundle\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.452708 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.452778 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a7c6d2e-f298-4367-a7e8-3028f67b102c-logs\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 
10:58:14.452805 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data-custom\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.453684 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a7c6d2e-f298-4367-a7e8-3028f67b102c-logs\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.458947 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data-custom\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.459481 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-combined-ca-bundle\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.465403 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.483981 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-l7lxm\" (UniqueName: \"kubernetes.io/projected/1a7c6d2e-f298-4367-a7e8-3028f67b102c-kube-api-access-l7lxm\") pod \"barbican-api-696b8bdd7d-7slj5\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.508243 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-6x5s4" event={"ID":"9ac43469-c72e-486a-80bf-f6de6bdfa199","Type":"ContainerDied","Data":"2318de45fdbcf53dd26657dd68e7c2b50bcaf2fcc9754e4237f90ad4084d5f81"} Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.508291 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2318de45fdbcf53dd26657dd68e7c2b50bcaf2fcc9754e4237f90ad4084d5f81" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.508302 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-6x5s4" Jan 21 10:58:14 crc kubenswrapper[4745]: I0121 10:58:14.633259 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.315957 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.323443 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.329498 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-km4vv" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.329717 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.329841 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.335929 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.375950 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.480470 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42879d72-308d-4dec-9961-82d3b55e429e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.483728 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-scripts\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.483769 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " 
pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.483818 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7xxh\" (UniqueName: \"kubernetes.io/projected/42879d72-308d-4dec-9961-82d3b55e429e-kube-api-access-f7xxh\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.483894 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.483909 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.534014 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wvtkb"] Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.601616 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42879d72-308d-4dec-9961-82d3b55e429e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.601699 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/42879d72-308d-4dec-9961-82d3b55e429e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.601775 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-scripts\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.601820 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.601944 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7xxh\" (UniqueName: \"kubernetes.io/projected/42879d72-308d-4dec-9961-82d3b55e429e-kube-api-access-f7xxh\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.602152 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.602178 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") 
" pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.606614 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4v29"] Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.619933 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.638569 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4v29"] Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.651414 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.651757 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-scripts\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.658462 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.659556 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7xxh\" (UniqueName: \"kubernetes.io/projected/42879d72-308d-4dec-9961-82d3b55e429e-kube-api-access-f7xxh\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 
crc kubenswrapper[4745]: I0121 10:58:15.690427 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") " pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.708658 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.708988 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-config\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.709131 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9vxz\" (UniqueName: \"kubernetes.io/projected/c12c404b-4d65-4d44-a58f-ab20031237eb-kube-api-access-l9vxz\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.717648 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc 
kubenswrapper[4745]: I0121 10:58:15.717883 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.718024 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.734183 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.805541 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.806885 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.823879 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.854155 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-config\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.855122 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9vxz\" (UniqueName: \"kubernetes.io/projected/c12c404b-4d65-4d44-a58f-ab20031237eb-kube-api-access-l9vxz\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.855162 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.855201 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.855245 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.855349 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.856134 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.854982 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-config\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.856731 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.857372 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.858632 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.860079 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.868003 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.868128 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.868225 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.872291 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"21f1327bc2ef040b6fb6ac8d74d92c5bf542264cab55a4f20977c7ed934dca6b"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed 
liveness probe, will be restarted" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.872446 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://21f1327bc2ef040b6fb6ac8d74d92c5bf542264cab55a4f20977c7ed934dca6b" gracePeriod=600 Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.887684 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9vxz\" (UniqueName: \"kubernetes.io/projected/c12c404b-4d65-4d44-a58f-ab20031237eb-kube-api-access-l9vxz\") pod \"dnsmasq-dns-5c9776ccc5-t4v29\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.956762 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.956805 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e97ae460-8069-4aff-bb90-d1d46d762e05-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.956833 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvmlp\" (UniqueName: \"kubernetes.io/projected/e97ae460-8069-4aff-bb90-d1d46d762e05-kube-api-access-hvmlp\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:15 crc kubenswrapper[4745]: 
I0121 10:58:15.957152 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data-custom\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.957235 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e97ae460-8069-4aff-bb90-d1d46d762e05-logs\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.957330 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:15 crc kubenswrapper[4745]: I0121 10:58:15.957385 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-scripts\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.006386 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.059094 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.059184 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-scripts\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.059255 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.059275 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e97ae460-8069-4aff-bb90-d1d46d762e05-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.059299 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvmlp\" (UniqueName: \"kubernetes.io/projected/e97ae460-8069-4aff-bb90-d1d46d762e05-kube-api-access-hvmlp\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.059377 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data-custom\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.059403 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e97ae460-8069-4aff-bb90-d1d46d762e05-logs\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.059807 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e97ae460-8069-4aff-bb90-d1d46d762e05-logs\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.060297 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e97ae460-8069-4aff-bb90-d1d46d762e05-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.072145 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data-custom\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.090277 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-scripts\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.095432 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.113300 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.119830 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvmlp\" (UniqueName: \"kubernetes.io/projected/e97ae460-8069-4aff-bb90-d1d46d762e05-kube-api-access-hvmlp\") pod \"cinder-api-0\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.170493 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.354036 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wvtkb"] Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.474739 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-c9f6db4f9-qq29j"] Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.621291 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-696b8bdd7d-7slj5"] Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.629373 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-795566bfc4-6vxf4"] Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.725229 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="21f1327bc2ef040b6fb6ac8d74d92c5bf542264cab55a4f20977c7ed934dca6b" exitCode=0 Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.725449 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"21f1327bc2ef040b6fb6ac8d74d92c5bf542264cab55a4f20977c7ed934dca6b"} Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.726168 4745 scope.go:117] "RemoveContainer" containerID="a809b13ad0c1d2cb669d0700f6bab3b22eddc9ebef1f9677d885d8d6e5615f59" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.756226 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.761681 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9221596-2fe8-46b3-b699-2360ddbe7dcf","Type":"ContainerStarted","Data":"27ae94af68d316b001b21eceadb9e0e4b75bba81327b73a5e267be29e635ae41"} Jan 21 10:58:16 crc 
kubenswrapper[4745]: I0121 10:58:16.761853 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="ceilometer-central-agent" containerID="cri-o://1bad4275339a90422ce7155a52421c1fbe91364387c2059f4b6f3fa7b83a770a" gracePeriod=30 Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.762063 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.762107 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="proxy-httpd" containerID="cri-o://27ae94af68d316b001b21eceadb9e0e4b75bba81327b73a5e267be29e635ae41" gracePeriod=30 Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.762148 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="sg-core" containerID="cri-o://20fc8f4f4578207bd73276fdefcdab0cecb5f1639fb5b7ecf67953ca2807f9de" gracePeriod=30 Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.762182 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="ceilometer-notification-agent" containerID="cri-o://2af0a5ec2b846cfbe35cc2f72a035f43328926dc58f9442f1e3a13ba67ae9e42" gracePeriod=30 Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.792170 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" event={"ID":"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af","Type":"ContainerStarted","Data":"c533a1a9e898a967b50f5c329d25d2f0a740a7771da57cbf39febb3b98797c9b"} Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.799250 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-c9f6db4f9-qq29j" event={"ID":"393b4909-d9ac-4852-9ccb-495be4b1b265","Type":"ContainerStarted","Data":"cdb0b41423ab51d137f131c41f44c6b14d7a95b0ff3309a36ca1b0ff3548f5f4"} Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.862663 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.212037878 podStartE2EDuration="1m22.862630689s" podCreationTimestamp="2026-01-21 10:56:54 +0000 UTC" firstStartedPulling="2026-01-21 10:56:56.753992829 +0000 UTC m=+1201.214780427" lastFinishedPulling="2026-01-21 10:58:15.40458563 +0000 UTC m=+1279.865373238" observedRunningTime="2026-01-21 10:58:16.844668797 +0000 UTC m=+1281.305456395" watchObservedRunningTime="2026-01-21 10:58:16.862630689 +0000 UTC m=+1281.323418287" Jan 21 10:58:16 crc kubenswrapper[4745]: I0121 10:58:16.964228 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 21 10:58:17 crc kubenswrapper[4745]: I0121 10:58:17.302495 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4v29"] Jan 21 10:58:17 crc kubenswrapper[4745]: I0121 10:58:17.371081 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 10:58:17 crc kubenswrapper[4745]: W0121 10:58:17.457488 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc12c404b_4d65_4d44_a58f_ab20031237eb.slice/crio-89f3bf582d99284d950e086a24cf1df5b857fdc43c88223a706cb29fad1836b2 WatchSource:0}: Error finding container 89f3bf582d99284d950e086a24cf1df5b857fdc43c88223a706cb29fad1836b2: Status 404 returned error can't find the container with id 89f3bf582d99284d950e086a24cf1df5b857fdc43c88223a706cb29fad1836b2 Jan 21 10:58:17 crc kubenswrapper[4745]: W0121 10:58:17.498792 4745 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode97ae460_8069_4aff_bb90_d1d46d762e05.slice/crio-c5ef13eedc19fc99630eb3bff125bce2a93d42d3e3eaf182eb2ed4ed33f1af65 WatchSource:0}: Error finding container c5ef13eedc19fc99630eb3bff125bce2a93d42d3e3eaf182eb2ed4ed33f1af65: Status 404 returned error can't find the container with id c5ef13eedc19fc99630eb3bff125bce2a93d42d3e3eaf182eb2ed4ed33f1af65 Jan 21 10:58:17 crc kubenswrapper[4745]: I0121 10:58:17.850712 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42879d72-308d-4dec-9961-82d3b55e429e","Type":"ContainerStarted","Data":"c2f56f590bb5abeb6765add99ce78aafdd4ae03a09cecd9f41850cba6f42d85f"} Jan 21 10:58:17 crc kubenswrapper[4745]: I0121 10:58:17.855013 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"de2d72e875ebdac4072b7484915db3fb7f2ddf3319a9637c3c9d5b967e4bccb7"} Jan 21 10:58:17 crc kubenswrapper[4745]: I0121 10:58:17.942477 4745 generic.go:334] "Generic (PLEG): container finished" podID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerID="27ae94af68d316b001b21eceadb9e0e4b75bba81327b73a5e267be29e635ae41" exitCode=0 Jan 21 10:58:17 crc kubenswrapper[4745]: I0121 10:58:17.942974 4745 generic.go:334] "Generic (PLEG): container finished" podID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerID="20fc8f4f4578207bd73276fdefcdab0cecb5f1639fb5b7ecf67953ca2807f9de" exitCode=2 Jan 21 10:58:17 crc kubenswrapper[4745]: I0121 10:58:17.943084 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9221596-2fe8-46b3-b699-2360ddbe7dcf","Type":"ContainerDied","Data":"27ae94af68d316b001b21eceadb9e0e4b75bba81327b73a5e267be29e635ae41"} Jan 21 10:58:17 crc kubenswrapper[4745]: I0121 10:58:17.943161 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"a9221596-2fe8-46b3-b699-2360ddbe7dcf","Type":"ContainerDied","Data":"20fc8f4f4578207bd73276fdefcdab0cecb5f1639fb5b7ecf67953ca2807f9de"} Jan 21 10:58:17 crc kubenswrapper[4745]: I0121 10:58:17.965182 4745 generic.go:334] "Generic (PLEG): container finished" podID="2c6694eb-ddea-4b3c-bfd1-2db7e1f404af" containerID="917b62b370376ae48f035baf46e650dcf7a85cd35c16872231e30e6a33c1836f" exitCode=0 Jan 21 10:58:17 crc kubenswrapper[4745]: I0121 10:58:17.965560 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" event={"ID":"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af","Type":"ContainerDied","Data":"917b62b370376ae48f035baf46e650dcf7a85cd35c16872231e30e6a33c1836f"} Jan 21 10:58:17 crc kubenswrapper[4745]: I0121 10:58:17.989790 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" event={"ID":"446eb8df-6f58-43b3-9c04-3741ac0f25a3","Type":"ContainerStarted","Data":"e6efdbfad74e667acdded1b744534ad7152da179abff172793e682606fe070ec"} Jan 21 10:58:18 crc kubenswrapper[4745]: I0121 10:58:18.190058 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e97ae460-8069-4aff-bb90-d1d46d762e05","Type":"ContainerStarted","Data":"c5ef13eedc19fc99630eb3bff125bce2a93d42d3e3eaf182eb2ed4ed33f1af65"} Jan 21 10:58:18 crc kubenswrapper[4745]: I0121 10:58:18.190136 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" event={"ID":"c12c404b-4d65-4d44-a58f-ab20031237eb","Type":"ContainerStarted","Data":"89f3bf582d99284d950e086a24cf1df5b857fdc43c88223a706cb29fad1836b2"} Jan 21 10:58:18 crc kubenswrapper[4745]: I0121 10:58:18.204310 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-696b8bdd7d-7slj5" 
event={"ID":"1a7c6d2e-f298-4367-a7e8-3028f67b102c","Type":"ContainerStarted","Data":"a574a75a2386cd799d6e1fa68c82341439d080b6605123d97919c6a1cd1339fc"} Jan 21 10:58:18 crc kubenswrapper[4745]: I0121 10:58:18.204371 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-696b8bdd7d-7slj5" event={"ID":"1a7c6d2e-f298-4367-a7e8-3028f67b102c","Type":"ContainerStarted","Data":"bc4a68f4044ff9ffd7acc4a74158267e35ed21be49fcfb00571708bea578d63e"} Jan 21 10:58:18 crc kubenswrapper[4745]: I0121 10:58:18.959126 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.049182 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-nb\") pod \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.049612 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-config\") pod \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.049630 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-svc\") pod \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.049690 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-swift-storage-0\") pod 
\"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.049726 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-sb\") pod \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.049777 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvr7w\" (UniqueName: \"kubernetes.io/projected/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-kube-api-access-xvr7w\") pod \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\" (UID: \"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af\") " Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.083955 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-kube-api-access-xvr7w" (OuterVolumeSpecName: "kube-api-access-xvr7w") pod "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af" (UID: "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af"). InnerVolumeSpecName "kube-api-access-xvr7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.122412 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af" (UID: "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.123065 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af" (UID: "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.139777 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-config" (OuterVolumeSpecName: "config") pod "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af" (UID: "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.150603 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.152065 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvr7w\" (UniqueName: \"kubernetes.io/projected/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-kube-api-access-xvr7w\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.152104 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.152113 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.152138 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.169225 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.170127 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af" (UID: "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.172807 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af" (UID: "2c6694eb-ddea-4b3c-bfd1-2db7e1f404af"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.253300 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.253331 4745 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.267736 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-94bcb9f8b-t6knd"] Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.267962 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-94bcb9f8b-t6knd" podUID="89c568b0-5492-496e-a324-93aeb78a82fd" containerName="neutron-api" containerID="cri-o://2ea6b68ded7ab89c63c85148a2e1867d4128c69f150a1cccf1017660dd508855" gracePeriod=30 Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.268415 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-94bcb9f8b-t6knd" podUID="89c568b0-5492-496e-a324-93aeb78a82fd" containerName="neutron-httpd" containerID="cri-o://3c0e701504d3132b3c1cbbfec9509408b319cceee1e8dcf2f7a753801a688187" gracePeriod=30 Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.301780 4745 generic.go:334] "Generic (PLEG): container finished" podID="c12c404b-4d65-4d44-a58f-ab20031237eb" containerID="0e2a22281c8f2d3f772fbd38e963863662224c3644969d00c9f9a26f9be4b75e" exitCode=0 Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.301860 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" 
event={"ID":"c12c404b-4d65-4d44-a58f-ab20031237eb","Type":"ContainerDied","Data":"0e2a22281c8f2d3f772fbd38e963863662224c3644969d00c9f9a26f9be4b75e"} Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.341515 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-696b8bdd7d-7slj5" event={"ID":"1a7c6d2e-f298-4367-a7e8-3028f67b102c","Type":"ContainerStarted","Data":"4123a288d6520e779dfdf9cfa9f95036dcdab26e1b185730d80a066ea1be28c2"} Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.342115 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.342178 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.387510 4745 generic.go:334] "Generic (PLEG): container finished" podID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerID="1bad4275339a90422ce7155a52421c1fbe91364387c2059f4b6f3fa7b83a770a" exitCode=0 Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.387790 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9221596-2fe8-46b3-b699-2360ddbe7dcf","Type":"ContainerDied","Data":"1bad4275339a90422ce7155a52421c1fbe91364387c2059f4b6f3fa7b83a770a"} Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.400162 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-696b8bdd7d-7slj5" podStartSLOduration=5.400141467 podStartE2EDuration="5.400141467s" podCreationTimestamp="2026-01-21 10:58:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:19.382915474 +0000 UTC m=+1283.843703072" watchObservedRunningTime="2026-01-21 10:58:19.400141467 +0000 UTC m=+1283.860929065" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 
10:58:19.428825 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.430879 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-wvtkb" event={"ID":"2c6694eb-ddea-4b3c-bfd1-2db7e1f404af","Type":"ContainerDied","Data":"c533a1a9e898a967b50f5c329d25d2f0a740a7771da57cbf39febb3b98797c9b"} Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.431025 4745 scope.go:117] "RemoveContainer" containerID="917b62b370376ae48f035baf46e650dcf7a85cd35c16872231e30e6a33c1836f" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.628016 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wvtkb"] Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.655587 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-wvtkb"] Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.711735 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.711801 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.712513 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"db044202ae0063faeb02cf75ac50f68010a4372bb2bd84a035565822361bf906"} pod="openstack/horizon-78cb545d88-xv4bf" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 10:58:19 crc kubenswrapper[4745]: I0121 10:58:19.712596 4745 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" containerID="cri-o://db044202ae0063faeb02cf75ac50f68010a4372bb2bd84a035565822361bf906" gracePeriod=30 Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.031280 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.036828 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c6694eb-ddea-4b3c-bfd1-2db7e1f404af" path="/var/lib/kubelet/pods/2c6694eb-ddea-4b3c-bfd1-2db7e1f404af/volumes" Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.442057 4745 generic.go:334] "Generic (PLEG): container finished" podID="89c568b0-5492-496e-a324-93aeb78a82fd" containerID="3c0e701504d3132b3c1cbbfec9509408b319cceee1e8dcf2f7a753801a688187" exitCode=0 Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.442514 4745 generic.go:334] "Generic (PLEG): container finished" podID="89c568b0-5492-496e-a324-93aeb78a82fd" containerID="2ea6b68ded7ab89c63c85148a2e1867d4128c69f150a1cccf1017660dd508855" exitCode=0 Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.442478 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94bcb9f8b-t6knd" event={"ID":"89c568b0-5492-496e-a324-93aeb78a82fd","Type":"ContainerDied","Data":"3c0e701504d3132b3c1cbbfec9509408b319cceee1e8dcf2f7a753801a688187"} Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.442645 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94bcb9f8b-t6knd" 
event={"ID":"89c568b0-5492-496e-a324-93aeb78a82fd","Type":"ContainerDied","Data":"2ea6b68ded7ab89c63c85148a2e1867d4128c69f150a1cccf1017660dd508855"} Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.448746 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e97ae460-8069-4aff-bb90-d1d46d762e05","Type":"ContainerStarted","Data":"4ccecd42bc92369c350ba723ddd6c3ce710951d601f6f8e4f4b626647e21e05a"} Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.460357 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" event={"ID":"c12c404b-4d65-4d44-a58f-ab20031237eb","Type":"ContainerStarted","Data":"9351db559210fdfb90a091b3ee9579b56be5e20ec5e6bfcabd921cfd88bd0aac"} Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.461211 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.478253 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42879d72-308d-4dec-9961-82d3b55e429e","Type":"ContainerStarted","Data":"0f40445d45e60c7bd6057f7953a23f67ee000357339fa0fd2ac433360ec42b00"} Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.494275 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" podStartSLOduration=5.494257281 podStartE2EDuration="5.494257281s" podCreationTimestamp="2026-01-21 10:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:20.487829257 +0000 UTC m=+1284.948616845" watchObservedRunningTime="2026-01-21 10:58:20.494257281 +0000 UTC m=+1284.955044869" Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.926814 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-657bb6888b-llfnx"] Jan 21 10:58:20 crc 
kubenswrapper[4745]: E0121 10:58:20.929491 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c6694eb-ddea-4b3c-bfd1-2db7e1f404af" containerName="init" Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.929656 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c6694eb-ddea-4b3c-bfd1-2db7e1f404af" containerName="init" Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.929884 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c6694eb-ddea-4b3c-bfd1-2db7e1f404af" containerName="init" Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.930877 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.935178 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.935914 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-657bb6888b-llfnx"] Jan 21 10:58:20 crc kubenswrapper[4745]: I0121 10:58:20.939068 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.017059 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-internal-tls-certs\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.017558 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-config-data-custom\") pod \"barbican-api-657bb6888b-llfnx\" (UID: 
\"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.017654 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-public-tls-certs\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.017763 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-logs\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.017843 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-combined-ca-bundle\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.017976 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-config-data\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.018063 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdt55\" (UniqueName: \"kubernetes.io/projected/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-kube-api-access-jdt55\") 
pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.127763 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-internal-tls-certs\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.127927 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-config-data-custom\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.127955 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-public-tls-certs\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.128089 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-logs\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.128142 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-combined-ca-bundle\") pod \"barbican-api-657bb6888b-llfnx\" (UID: 
\"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.128399 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-config-data\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.128468 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdt55\" (UniqueName: \"kubernetes.io/projected/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-kube-api-access-jdt55\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.129961 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-logs\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.141169 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-public-tls-certs\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.146933 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-combined-ca-bundle\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " 
pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.148233 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-config-data\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.151506 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdt55\" (UniqueName: \"kubernetes.io/projected/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-kube-api-access-jdt55\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.153360 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-config-data-custom\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.155336 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0da39e8-c4a0-47b5-8427-fb3b731cb0d4-internal-tls-certs\") pod \"barbican-api-657bb6888b-llfnx\" (UID: \"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4\") " pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.289060 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-657bb6888b-llfnx" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.501034 4745 generic.go:334] "Generic (PLEG): container finished" podID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerID="2af0a5ec2b846cfbe35cc2f72a035f43328926dc58f9442f1e3a13ba67ae9e42" exitCode=0 Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.501602 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9221596-2fe8-46b3-b699-2360ddbe7dcf","Type":"ContainerDied","Data":"2af0a5ec2b846cfbe35cc2f72a035f43328926dc58f9442f1e3a13ba67ae9e42"} Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.512437 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e97ae460-8069-4aff-bb90-d1d46d762e05","Type":"ContainerStarted","Data":"f9b7eedc50235dc25085a1c1f6ade4a9ea92a812fcf7bcf2629d6d04a728c760"} Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.512618 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e97ae460-8069-4aff-bb90-d1d46d762e05" containerName="cinder-api" containerID="cri-o://f9b7eedc50235dc25085a1c1f6ade4a9ea92a812fcf7bcf2629d6d04a728c760" gracePeriod=30 Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.512740 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e97ae460-8069-4aff-bb90-d1d46d762e05" containerName="cinder-api-log" containerID="cri-o://4ccecd42bc92369c350ba723ddd6c3ce710951d601f6f8e4f4b626647e21e05a" gracePeriod=30 Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.513427 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 21 10:58:21 crc kubenswrapper[4745]: I0121 10:58:21.553519 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.553500815 
podStartE2EDuration="6.553500815s" podCreationTimestamp="2026-01-21 10:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:21.536982091 +0000 UTC m=+1285.997769689" watchObservedRunningTime="2026-01-21 10:58:21.553500815 +0000 UTC m=+1286.014288403" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.521919 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.602204 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.602756 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9221596-2fe8-46b3-b699-2360ddbe7dcf","Type":"ContainerDied","Data":"842dc5e4a3ef20996a3954f8915b120bd61ac8984bc15b18992cd2bc9e372c15"} Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.602791 4745 scope.go:117] "RemoveContainer" containerID="27ae94af68d316b001b21eceadb9e0e4b75bba81327b73a5e267be29e635ae41" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.602888 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.638217 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94bcb9f8b-t6knd" event={"ID":"89c568b0-5492-496e-a324-93aeb78a82fd","Type":"ContainerDied","Data":"fef845246fa0e61775de7fcb7b5ed7a1a6024925b7b21f5243472c2aeba30e89"} Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.638306 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-94bcb9f8b-t6knd" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.687974 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-sg-core-conf-yaml\") pod \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.693800 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-log-httpd\") pod \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.688972 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e97ae460-8069-4aff-bb90-d1d46d762e05","Type":"ContainerDied","Data":"f9b7eedc50235dc25085a1c1f6ade4a9ea92a812fcf7bcf2629d6d04a728c760"} Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.688943 4745 generic.go:334] "Generic (PLEG): container finished" podID="e97ae460-8069-4aff-bb90-d1d46d762e05" containerID="f9b7eedc50235dc25085a1c1f6ade4a9ea92a812fcf7bcf2629d6d04a728c760" exitCode=0 Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.693935 4745 generic.go:334] "Generic (PLEG): container finished" podID="e97ae460-8069-4aff-bb90-d1d46d762e05" containerID="4ccecd42bc92369c350ba723ddd6c3ce710951d601f6f8e4f4b626647e21e05a" exitCode=143 Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.693965 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e97ae460-8069-4aff-bb90-d1d46d762e05","Type":"ContainerDied","Data":"4ccecd42bc92369c350ba723ddd6c3ce710951d601f6f8e4f4b626647e21e05a"} Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.694044 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-m5lc6\" (UniqueName: \"kubernetes.io/projected/a9221596-2fe8-46b3-b699-2360ddbe7dcf-kube-api-access-m5lc6\") pod \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.694085 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-run-httpd\") pod \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.694121 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-config-data\") pod \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.694246 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-combined-ca-bundle\") pod \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.694292 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-scripts\") pod \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\" (UID: \"a9221596-2fe8-46b3-b699-2360ddbe7dcf\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.695398 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a9221596-2fe8-46b3-b699-2360ddbe7dcf" (UID: "a9221596-2fe8-46b3-b699-2360ddbe7dcf"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.696240 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a9221596-2fe8-46b3-b699-2360ddbe7dcf" (UID: "a9221596-2fe8-46b3-b699-2360ddbe7dcf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.700033 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-scripts" (OuterVolumeSpecName: "scripts") pod "a9221596-2fe8-46b3-b699-2360ddbe7dcf" (UID: "a9221596-2fe8-46b3-b699-2360ddbe7dcf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.700617 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9221596-2fe8-46b3-b699-2360ddbe7dcf-kube-api-access-m5lc6" (OuterVolumeSpecName: "kube-api-access-m5lc6") pod "a9221596-2fe8-46b3-b699-2360ddbe7dcf" (UID: "a9221596-2fe8-46b3-b699-2360ddbe7dcf"). InnerVolumeSpecName "kube-api-access-m5lc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.722770 4745 scope.go:117] "RemoveContainer" containerID="20fc8f4f4578207bd73276fdefcdab0cecb5f1639fb5b7ecf67953ca2807f9de" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.768189 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a9221596-2fe8-46b3-b699-2360ddbe7dcf" (UID: "a9221596-2fe8-46b3-b699-2360ddbe7dcf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.782843 4745 scope.go:117] "RemoveContainer" containerID="2af0a5ec2b846cfbe35cc2f72a035f43328926dc58f9442f1e3a13ba67ae9e42" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.790236 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.795854 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-httpd-config\") pod \"89c568b0-5492-496e-a324-93aeb78a82fd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.795971 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6ktr\" (UniqueName: \"kubernetes.io/projected/89c568b0-5492-496e-a324-93aeb78a82fd-kube-api-access-g6ktr\") pod \"89c568b0-5492-496e-a324-93aeb78a82fd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.796000 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-config\") pod \"89c568b0-5492-496e-a324-93aeb78a82fd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.796054 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-ovndb-tls-certs\") pod \"89c568b0-5492-496e-a324-93aeb78a82fd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.796089 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-combined-ca-bundle\") pod \"89c568b0-5492-496e-a324-93aeb78a82fd\" (UID: \"89c568b0-5492-496e-a324-93aeb78a82fd\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.796467 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.796478 4745 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.796488 4745 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.796496 4745 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9221596-2fe8-46b3-b699-2360ddbe7dcf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.796504 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5lc6\" (UniqueName: \"kubernetes.io/projected/a9221596-2fe8-46b3-b699-2360ddbe7dcf-kube-api-access-m5lc6\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.825631 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "89c568b0-5492-496e-a324-93aeb78a82fd" (UID: "89c568b0-5492-496e-a324-93aeb78a82fd"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.828622 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c568b0-5492-496e-a324-93aeb78a82fd-kube-api-access-g6ktr" (OuterVolumeSpecName: "kube-api-access-g6ktr") pod "89c568b0-5492-496e-a324-93aeb78a82fd" (UID: "89c568b0-5492-496e-a324-93aeb78a82fd"). InnerVolumeSpecName "kube-api-access-g6ktr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.856497 4745 scope.go:117] "RemoveContainer" containerID="1bad4275339a90422ce7155a52421c1fbe91364387c2059f4b6f3fa7b83a770a" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.897304 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data\") pod \"e97ae460-8069-4aff-bb90-d1d46d762e05\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.897675 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hvmlp\" (UniqueName: \"kubernetes.io/projected/e97ae460-8069-4aff-bb90-d1d46d762e05-kube-api-access-hvmlp\") pod \"e97ae460-8069-4aff-bb90-d1d46d762e05\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.897701 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-scripts\") pod \"e97ae460-8069-4aff-bb90-d1d46d762e05\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.897726 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-combined-ca-bundle\") pod \"e97ae460-8069-4aff-bb90-d1d46d762e05\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.897764 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e97ae460-8069-4aff-bb90-d1d46d762e05-etc-machine-id\") pod \"e97ae460-8069-4aff-bb90-d1d46d762e05\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.897808 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data-custom\") pod \"e97ae460-8069-4aff-bb90-d1d46d762e05\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.897985 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e97ae460-8069-4aff-bb90-d1d46d762e05-logs\") pod \"e97ae460-8069-4aff-bb90-d1d46d762e05\" (UID: \"e97ae460-8069-4aff-bb90-d1d46d762e05\") " Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.898427 4745 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.898445 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6ktr\" (UniqueName: \"kubernetes.io/projected/89c568b0-5492-496e-a324-93aeb78a82fd-kube-api-access-g6ktr\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.900608 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e97ae460-8069-4aff-bb90-d1d46d762e05-etc-machine-id" 
(OuterVolumeSpecName: "etc-machine-id") pod "e97ae460-8069-4aff-bb90-d1d46d762e05" (UID: "e97ae460-8069-4aff-bb90-d1d46d762e05"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.900851 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e97ae460-8069-4aff-bb90-d1d46d762e05-logs" (OuterVolumeSpecName: "logs") pod "e97ae460-8069-4aff-bb90-d1d46d762e05" (UID: "e97ae460-8069-4aff-bb90-d1d46d762e05"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.911315 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e97ae460-8069-4aff-bb90-d1d46d762e05-kube-api-access-hvmlp" (OuterVolumeSpecName: "kube-api-access-hvmlp") pod "e97ae460-8069-4aff-bb90-d1d46d762e05" (UID: "e97ae460-8069-4aff-bb90-d1d46d762e05"). InnerVolumeSpecName "kube-api-access-hvmlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.912015 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9221596-2fe8-46b3-b699-2360ddbe7dcf" (UID: "a9221596-2fe8-46b3-b699-2360ddbe7dcf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.921799 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-scripts" (OuterVolumeSpecName: "scripts") pod "e97ae460-8069-4aff-bb90-d1d46d762e05" (UID: "e97ae460-8069-4aff-bb90-d1d46d762e05"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.921840 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e97ae460-8069-4aff-bb90-d1d46d762e05" (UID: "e97ae460-8069-4aff-bb90-d1d46d762e05"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.923372 4745 scope.go:117] "RemoveContainer" containerID="3c0e701504d3132b3c1cbbfec9509408b319cceee1e8dcf2f7a753801a688187" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.930658 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-config" (OuterVolumeSpecName: "config") pod "89c568b0-5492-496e-a324-93aeb78a82fd" (UID: "89c568b0-5492-496e-a324-93aeb78a82fd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:22 crc kubenswrapper[4745]: I0121 10:58:22.963044 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89c568b0-5492-496e-a324-93aeb78a82fd" (UID: "89c568b0-5492-496e-a324-93aeb78a82fd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.002450 4745 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.002486 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.002496 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.002505 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.002515 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e97ae460-8069-4aff-bb90-d1d46d762e05-logs\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.002571 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hvmlp\" (UniqueName: \"kubernetes.io/projected/e97ae460-8069-4aff-bb90-d1d46d762e05-kube-api-access-hvmlp\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.002582 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.002590 4745 reconciler_common.go:293] 
"Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e97ae460-8069-4aff-bb90-d1d46d762e05-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.036476 4745 scope.go:117] "RemoveContainer" containerID="2ea6b68ded7ab89c63c85148a2e1867d4128c69f150a1cccf1017660dd508855" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.063135 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-config-data" (OuterVolumeSpecName: "config-data") pod "a9221596-2fe8-46b3-b699-2360ddbe7dcf" (UID: "a9221596-2fe8-46b3-b699-2360ddbe7dcf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.078455 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e97ae460-8069-4aff-bb90-d1d46d762e05" (UID: "e97ae460-8069-4aff-bb90-d1d46d762e05"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.108933 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9221596-2fe8-46b3-b699-2360ddbe7dcf-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.108972 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.108149 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data" (OuterVolumeSpecName: "config-data") pod "e97ae460-8069-4aff-bb90-d1d46d762e05" (UID: "e97ae460-8069-4aff-bb90-d1d46d762e05"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.135437 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "89c568b0-5492-496e-a324-93aeb78a82fd" (UID: "89c568b0-5492-496e-a324-93aeb78a82fd"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.154199 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-657bb6888b-llfnx"] Jan 21 10:58:23 crc kubenswrapper[4745]: W0121 10:58:23.168356 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0da39e8_c4a0_47b5_8427_fb3b731cb0d4.slice/crio-aef32ea64b5fe59485959dfbce2de532b9f3425d76b62df5a5d3de5020f7b3d6 WatchSource:0}: Error finding container aef32ea64b5fe59485959dfbce2de532b9f3425d76b62df5a5d3de5020f7b3d6: Status 404 returned error can't find the container with id aef32ea64b5fe59485959dfbce2de532b9f3425d76b62df5a5d3de5020f7b3d6 Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.210599 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e97ae460-8069-4aff-bb90-d1d46d762e05-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.211232 4745 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89c568b0-5492-496e-a324-93aeb78a82fd-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.258353 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.280563 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347236 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:58:23 crc kubenswrapper[4745]: E0121 10:58:23.347580 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="sg-core" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347594 4745 
state_mem.go:107] "Deleted CPUSet assignment" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="sg-core" Jan 21 10:58:23 crc kubenswrapper[4745]: E0121 10:58:23.347607 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c568b0-5492-496e-a324-93aeb78a82fd" containerName="neutron-api" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347615 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="89c568b0-5492-496e-a324-93aeb78a82fd" containerName="neutron-api" Jan 21 10:58:23 crc kubenswrapper[4745]: E0121 10:58:23.347627 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97ae460-8069-4aff-bb90-d1d46d762e05" containerName="cinder-api-log" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347633 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97ae460-8069-4aff-bb90-d1d46d762e05" containerName="cinder-api-log" Jan 21 10:58:23 crc kubenswrapper[4745]: E0121 10:58:23.347642 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="proxy-httpd" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347647 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="proxy-httpd" Jan 21 10:58:23 crc kubenswrapper[4745]: E0121 10:58:23.347659 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="ceilometer-notification-agent" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347665 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="ceilometer-notification-agent" Jan 21 10:58:23 crc kubenswrapper[4745]: E0121 10:58:23.347684 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89c568b0-5492-496e-a324-93aeb78a82fd" containerName="neutron-httpd" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347690 4745 
state_mem.go:107] "Deleted CPUSet assignment" podUID="89c568b0-5492-496e-a324-93aeb78a82fd" containerName="neutron-httpd" Jan 21 10:58:23 crc kubenswrapper[4745]: E0121 10:58:23.347702 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e97ae460-8069-4aff-bb90-d1d46d762e05" containerName="cinder-api" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347708 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e97ae460-8069-4aff-bb90-d1d46d762e05" containerName="cinder-api" Jan 21 10:58:23 crc kubenswrapper[4745]: E0121 10:58:23.347722 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="ceilometer-central-agent" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347728 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="ceilometer-central-agent" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347878 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="proxy-httpd" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347892 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c568b0-5492-496e-a324-93aeb78a82fd" containerName="neutron-api" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347906 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="ceilometer-notification-agent" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347914 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e97ae460-8069-4aff-bb90-d1d46d762e05" containerName="cinder-api-log" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347922 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e97ae460-8069-4aff-bb90-d1d46d762e05" containerName="cinder-api" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 
10:58:23.347932 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c568b0-5492-496e-a324-93aeb78a82fd" containerName="neutron-httpd" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347944 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="ceilometer-central-agent" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.347955 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" containerName="sg-core" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.349512 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.360853 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.361219 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.387445 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.526060 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcmhr\" (UniqueName: \"kubernetes.io/projected/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-kube-api-access-jcmhr\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.526137 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc 
kubenswrapper[4745]: I0121 10:58:23.526166 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-config-data\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.526253 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-scripts\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.526338 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.526427 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-log-httpd\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.526615 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-run-httpd\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.627929 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-log-httpd\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.628062 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-run-httpd\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.628143 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcmhr\" (UniqueName: \"kubernetes.io/projected/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-kube-api-access-jcmhr\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.628172 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.628202 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-config-data\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.628236 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-scripts\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 
10:58:23.628283 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.629648 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-log-httpd\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.630055 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-run-httpd\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.638382 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.639667 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.648747 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-config-data\") pod \"ceilometer-0\" (UID: 
\"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.656634 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-scripts\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.674428 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-94bcb9f8b-t6knd"] Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.685369 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcmhr\" (UniqueName: \"kubernetes.io/projected/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-kube-api-access-jcmhr\") pod \"ceilometer-0\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.687893 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-94bcb9f8b-t6knd"] Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.715654 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9f6db4f9-qq29j" event={"ID":"393b4909-d9ac-4852-9ccb-495be4b1b265","Type":"ContainerStarted","Data":"ddf8b837f36e73e3a8e0997268a2235cb98cc680533dc7daffdb4a6866ea455f"} Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.715708 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-c9f6db4f9-qq29j" event={"ID":"393b4909-d9ac-4852-9ccb-495be4b1b265","Type":"ContainerStarted","Data":"ab5bec43fa4909209b91beb21fe644890b62fce4e5002d19326216ab1c4a6927"} Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.720269 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" 
event={"ID":"446eb8df-6f58-43b3-9c04-3741ac0f25a3","Type":"ContainerStarted","Data":"64336bfba1f8c113116bbe2f252e0d8c4e75d2fec8c7953ca17fa42d124c2e6c"} Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.727327 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e97ae460-8069-4aff-bb90-d1d46d762e05","Type":"ContainerDied","Data":"c5ef13eedc19fc99630eb3bff125bce2a93d42d3e3eaf182eb2ed4ed33f1af65"} Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.727382 4745 scope.go:117] "RemoveContainer" containerID="f9b7eedc50235dc25085a1c1f6ade4a9ea92a812fcf7bcf2629d6d04a728c760" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.727499 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.742265 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657bb6888b-llfnx" event={"ID":"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4","Type":"ContainerStarted","Data":"aef32ea64b5fe59485959dfbce2de532b9f3425d76b62df5a5d3de5020f7b3d6"} Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.784463 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.795198 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-c9f6db4f9-qq29j" podStartSLOduration=4.84248751 podStartE2EDuration="10.795175468s" podCreationTimestamp="2026-01-21 10:58:13 +0000 UTC" firstStartedPulling="2026-01-21 10:58:16.382765835 +0000 UTC m=+1280.843553433" lastFinishedPulling="2026-01-21 10:58:22.335453793 +0000 UTC m=+1286.796241391" observedRunningTime="2026-01-21 10:58:23.746231401 +0000 UTC m=+1288.207018999" watchObservedRunningTime="2026-01-21 10:58:23.795175468 +0000 UTC m=+1288.255963066" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.871332 4745 scope.go:117] "RemoveContainer" containerID="4ccecd42bc92369c350ba723ddd6c3ce710951d601f6f8e4f4b626647e21e05a" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.883987 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.907825 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.924480 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.928291 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.928415 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.936750 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.937048 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 21 10:58:23 crc kubenswrapper[4745]: I0121 10:58:23.937220 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.043031 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c568b0-5492-496e-a324-93aeb78a82fd" path="/var/lib/kubelet/pods/89c568b0-5492-496e-a324-93aeb78a82fd/volumes" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.043699 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9221596-2fe8-46b3-b699-2360ddbe7dcf" path="/var/lib/kubelet/pods/a9221596-2fe8-46b3-b699-2360ddbe7dcf/volumes" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.050385 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e97ae460-8069-4aff-bb90-d1d46d762e05" path="/var/lib/kubelet/pods/e97ae460-8069-4aff-bb90-d1d46d762e05/volumes" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.110166 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.110200 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-config-data\") pod \"cinder-api-0\" (UID: 
\"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.110219 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.110246 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.110274 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.110296 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g64hg\" (UniqueName: \"kubernetes.io/projected/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-kube-api-access-g64hg\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.110342 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-scripts\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc 
kubenswrapper[4745]: I0121 10:58:24.110365 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.110399 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-logs\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.211837 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.211901 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g64hg\" (UniqueName: \"kubernetes.io/projected/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-kube-api-access-g64hg\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.211964 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-scripts\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.211997 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.212250 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-logs\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.212376 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.212400 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-config-data\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.212437 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.212488 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 
10:58:24.215318 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.218775 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-logs\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.224603 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-config-data-custom\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.224963 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-config-data\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.225058 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-scripts\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.228734 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " 
pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.234254 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.235120 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.251337 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g64hg\" (UniqueName: \"kubernetes.io/projected/c7a564b0-2da4-4d9c-a8a2-e61604758a1f-kube-api-access-g64hg\") pod \"cinder-api-0\" (UID: \"c7a564b0-2da4-4d9c-a8a2-e61604758a1f\") " pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.331019 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.550076 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.794273 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c","Type":"ContainerStarted","Data":"c1c003ace73ea751aefa1ddfee637b69ad096c2aada9e639051ae41aee917f1c"} Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.827143 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" event={"ID":"446eb8df-6f58-43b3-9c04-3741ac0f25a3","Type":"ContainerStarted","Data":"a85e2f67b4f0d6c15768d74fc4c8ee8713f48fc00b9aa7a8420c7b6b750b431b"} Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.862923 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657bb6888b-llfnx" event={"ID":"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4","Type":"ContainerStarted","Data":"c94733c56d4fa2341e0bb2e0373f12efc531b5f06c124186912c394b9fbc367f"} Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.869159 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42879d72-308d-4dec-9961-82d3b55e429e","Type":"ContainerStarted","Data":"c4ef11755a9bcc3f3535db4ef8b01ca42bbf13481e9c40ed2996d6ecb92a60ea"} Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.893872 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-795566bfc4-6vxf4" podStartSLOduration=6.142794047 podStartE2EDuration="11.893843683s" podCreationTimestamp="2026-01-21 10:58:13 +0000 UTC" firstStartedPulling="2026-01-21 10:58:16.671303274 +0000 UTC m=+1281.132090862" lastFinishedPulling="2026-01-21 10:58:22.42235289 +0000 UTC m=+1286.883140498" observedRunningTime="2026-01-21 10:58:24.8754911 +0000 UTC 
m=+1289.336278688" watchObservedRunningTime="2026-01-21 10:58:24.893843683 +0000 UTC m=+1289.354631281" Jan 21 10:58:24 crc kubenswrapper[4745]: I0121 10:58:24.946686 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=8.559830988 podStartE2EDuration="9.946669912s" podCreationTimestamp="2026-01-21 10:58:15 +0000 UTC" firstStartedPulling="2026-01-21 10:58:17.364896917 +0000 UTC m=+1281.825684515" lastFinishedPulling="2026-01-21 10:58:18.751735841 +0000 UTC m=+1283.212523439" observedRunningTime="2026-01-21 10:58:24.945594244 +0000 UTC m=+1289.406381842" watchObservedRunningTime="2026-01-21 10:58:24.946669912 +0000 UTC m=+1289.407457500" Jan 21 10:58:25 crc kubenswrapper[4745]: W0121 10:58:25.241563 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7a564b0_2da4_4d9c_a8a2_e61604758a1f.slice/crio-f9a430da3e731ebea03aa1332cbfd245917ac5a21f41df9b92ec723349536d4d WatchSource:0}: Error finding container f9a430da3e731ebea03aa1332cbfd245917ac5a21f41df9b92ec723349536d4d: Status 404 returned error can't find the container with id f9a430da3e731ebea03aa1332cbfd245917ac5a21f41df9b92ec723349536d4d Jan 21 10:58:25 crc kubenswrapper[4745]: I0121 10:58:25.245794 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 21 10:58:25 crc kubenswrapper[4745]: I0121 10:58:25.735285 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 10:58:25 crc kubenswrapper[4745]: I0121 10:58:25.738394 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="42879d72-308d-4dec-9961-82d3b55e429e" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.166:8080/\": dial tcp 10.217.0.166:8080: connect: connection refused" Jan 21 10:58:25 crc kubenswrapper[4745]: I0121 10:58:25.903467 
4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7a564b0-2da4-4d9c-a8a2-e61604758a1f","Type":"ContainerStarted","Data":"f9a430da3e731ebea03aa1332cbfd245917ac5a21f41df9b92ec723349536d4d"}
Jan 21 10:58:26 crc kubenswrapper[4745]: I0121 10:58:26.047877 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29"
Jan 21 10:58:26 crc kubenswrapper[4745]: I0121 10:58:26.383251 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-ld94n"]
Jan 21 10:58:26 crc kubenswrapper[4745]: I0121 10:58:26.383606 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" podUID="c501f46e-be57-458d-bb01-a1db3aecbd93" containerName="dnsmasq-dns" containerID="cri-o://e2ed3256fd45122a9544514b9b54e03818b3c771d057ec7cad3089e392f53dc6" gracePeriod=10
Jan 21 10:58:26 crc kubenswrapper[4745]: I0121 10:58:26.606107 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" podUID="c501f46e-be57-458d-bb01-a1db3aecbd93" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.157:5353: connect: connection refused"
Jan 21 10:58:26 crc kubenswrapper[4745]: I0121 10:58:26.999105 4745 generic.go:334] "Generic (PLEG): container finished" podID="c501f46e-be57-458d-bb01-a1db3aecbd93" containerID="e2ed3256fd45122a9544514b9b54e03818b3c771d057ec7cad3089e392f53dc6" exitCode=0
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:26.999643 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" event={"ID":"c501f46e-be57-458d-bb01-a1db3aecbd93","Type":"ContainerDied","Data":"e2ed3256fd45122a9544514b9b54e03818b3c771d057ec7cad3089e392f53dc6"}
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.019212 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7a564b0-2da4-4d9c-a8a2-e61604758a1f","Type":"ContainerStarted","Data":"733b27df748ea7f0c01b34aaea39972279f099bfe7ca89ed9c18312384cf8e35"}
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.050387 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-657bb6888b-llfnx" event={"ID":"e0da39e8-c4a0-47b5-8427-fb3b731cb0d4","Type":"ContainerStarted","Data":"5b6cccefc121a422dbfb2f0a3c6c87ac78e990e3861055905b83d2e3f01a79f1"}
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.050448 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-657bb6888b-llfnx"
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.050464 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-657bb6888b-llfnx"
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.095902 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-657bb6888b-llfnx" podStartSLOduration=7.095885519 podStartE2EDuration="7.095885519s" podCreationTimestamp="2026-01-21 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:27.093367192 +0000 UTC m=+1291.554154790" watchObservedRunningTime="2026-01-21 10:58:27.095885519 +0000 UTC m=+1291.556673117"
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.682971 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-ld94n"
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.828295 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-swift-storage-0\") pod \"c501f46e-be57-458d-bb01-a1db3aecbd93\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") "
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.828408 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-sb\") pod \"c501f46e-be57-458d-bb01-a1db3aecbd93\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") "
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.828474 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-nb\") pod \"c501f46e-be57-458d-bb01-a1db3aecbd93\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") "
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.828567 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-svc\") pod \"c501f46e-be57-458d-bb01-a1db3aecbd93\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") "
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.828616 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwdsn\" (UniqueName: \"kubernetes.io/projected/c501f46e-be57-458d-bb01-a1db3aecbd93-kube-api-access-rwdsn\") pod \"c501f46e-be57-458d-bb01-a1db3aecbd93\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") "
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.828661 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-config\") pod \"c501f46e-be57-458d-bb01-a1db3aecbd93\" (UID: \"c501f46e-be57-458d-bb01-a1db3aecbd93\") "
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.851816 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c501f46e-be57-458d-bb01-a1db3aecbd93-kube-api-access-rwdsn" (OuterVolumeSpecName: "kube-api-access-rwdsn") pod "c501f46e-be57-458d-bb01-a1db3aecbd93" (UID: "c501f46e-be57-458d-bb01-a1db3aecbd93"). InnerVolumeSpecName "kube-api-access-rwdsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.932682 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwdsn\" (UniqueName: \"kubernetes.io/projected/c501f46e-be57-458d-bb01-a1db3aecbd93-kube-api-access-rwdsn\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:27 crc kubenswrapper[4745]: I0121 10:58:27.947507 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c501f46e-be57-458d-bb01-a1db3aecbd93" (UID: "c501f46e-be57-458d-bb01-a1db3aecbd93"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.032498 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c501f46e-be57-458d-bb01-a1db3aecbd93" (UID: "c501f46e-be57-458d-bb01-a1db3aecbd93"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.039615 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.039644 4745 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.064003 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c501f46e-be57-458d-bb01-a1db3aecbd93" (UID: "c501f46e-be57-458d-bb01-a1db3aecbd93"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.079148 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c","Type":"ContainerStarted","Data":"8573d22bc4e2872ddf249e7dee6cbb87fd3ae6055786237637718874b23e0ce4"}
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.098406 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-ld94n"
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.098673 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-ld94n" event={"ID":"c501f46e-be57-458d-bb01-a1db3aecbd93","Type":"ContainerDied","Data":"5285de65f9fb63140296bf7ed69edc5b1b973f3abcb7436730bf0f4f63ca7811"}
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.098740 4745 scope.go:117] "RemoveContainer" containerID="e2ed3256fd45122a9544514b9b54e03818b3c771d057ec7cad3089e392f53dc6"
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.140915 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.404171 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-config" (OuterVolumeSpecName: "config") pod "c501f46e-be57-458d-bb01-a1db3aecbd93" (UID: "c501f46e-be57-458d-bb01-a1db3aecbd93"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.404523 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c501f46e-be57-458d-bb01-a1db3aecbd93" (UID: "c501f46e-be57-458d-bb01-a1db3aecbd93"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.416161 4745 scope.go:117] "RemoveContainer" containerID="46f852bed121ee73121cecad77ba2e0f1575fe98982906baca392f6d52f46b57"
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.456686 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.456723 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c501f46e-be57-458d-bb01-a1db3aecbd93-config\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.755418 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-696b8bdd7d-7slj5" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.756190 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-696b8bdd7d-7slj5" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.763822 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-ld94n"]
Jan 21 10:58:28 crc kubenswrapper[4745]: I0121 10:58:28.784060 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-ld94n"]
Jan 21 10:58:29 crc kubenswrapper[4745]: I0121 10:58:29.118056 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"c7a564b0-2da4-4d9c-a8a2-e61604758a1f","Type":"ContainerStarted","Data":"d35fe920d30f5ab9449f89c546e17aae10502acc283d96fed3ff805104e67b12"}
Jan 21 10:58:29 crc kubenswrapper[4745]: I0121 10:58:29.118191 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Jan 21 10:58:29 crc kubenswrapper[4745]: I0121 10:58:29.151156 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.151136749 podStartE2EDuration="6.151136749s" podCreationTimestamp="2026-01-21 10:58:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:29.146877974 +0000 UTC m=+1293.607665572" watchObservedRunningTime="2026-01-21 10:58:29.151136749 +0000 UTC m=+1293.611924347"
Jan 21 10:58:29 crc kubenswrapper[4745]: I0121 10:58:29.717858 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-696b8bdd7d-7slj5" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 10:58:29 crc kubenswrapper[4745]: I0121 10:58:29.717892 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-696b8bdd7d-7slj5" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 10:58:30 crc kubenswrapper[4745]: I0121 10:58:30.028208 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c501f46e-be57-458d-bb01-a1db3aecbd93" path="/var/lib/kubelet/pods/c501f46e-be57-458d-bb01-a1db3aecbd93/volumes"
Jan 21 10:58:30 crc kubenswrapper[4745]: I0121 10:58:30.978869 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 21 10:58:31 crc kubenswrapper[4745]: I0121 10:58:31.041066 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 10:58:31 crc kubenswrapper[4745]: I0121 10:58:31.133937 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="42879d72-308d-4dec-9961-82d3b55e429e" containerName="cinder-scheduler" containerID="cri-o://0f40445d45e60c7bd6057f7953a23f67ee000357339fa0fd2ac433360ec42b00" gracePeriod=30
Jan 21 10:58:31 crc kubenswrapper[4745]: I0121 10:58:31.134029 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="42879d72-308d-4dec-9961-82d3b55e429e" containerName="probe" containerID="cri-o://c4ef11755a9bcc3f3535db4ef8b01ca42bbf13481e9c40ed2996d6ecb92a60ea" gracePeriod=30
Jan 21 10:58:31 crc kubenswrapper[4745]: I0121 10:58:31.735458 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-657bb6888b-llfnx"
Jan 21 10:58:32 crc kubenswrapper[4745]: I0121 10:58:32.023157 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-696b8bdd7d-7slj5"
Jan 21 10:58:32 crc kubenswrapper[4745]: I0121 10:58:32.174092 4745 generic.go:334] "Generic (PLEG): container finished" podID="42879d72-308d-4dec-9961-82d3b55e429e" containerID="0f40445d45e60c7bd6057f7953a23f67ee000357339fa0fd2ac433360ec42b00" exitCode=0
Jan 21 10:58:32 crc kubenswrapper[4745]: I0121 10:58:32.174150 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42879d72-308d-4dec-9961-82d3b55e429e","Type":"ContainerDied","Data":"0f40445d45e60c7bd6057f7953a23f67ee000357339fa0fd2ac433360ec42b00"}
Jan 21 10:58:32 crc kubenswrapper[4745]: I0121 10:58:32.342677 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-696b8bdd7d-7slj5"
Jan 21 10:58:33 crc kubenswrapper[4745]: I0121 10:58:33.248505 4745 generic.go:334] "Generic (PLEG): container finished" podID="42879d72-308d-4dec-9961-82d3b55e429e" containerID="c4ef11755a9bcc3f3535db4ef8b01ca42bbf13481e9c40ed2996d6ecb92a60ea" exitCode=0
Jan 21 10:58:33 crc kubenswrapper[4745]: I0121 10:58:33.249043 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42879d72-308d-4dec-9961-82d3b55e429e","Type":"ContainerDied","Data":"c4ef11755a9bcc3f3535db4ef8b01ca42bbf13481e9c40ed2996d6ecb92a60ea"}
Jan 21 10:58:33 crc kubenswrapper[4745]: I0121 10:58:33.624668 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-657bb6888b-llfnx"
Jan 21 10:58:33 crc kubenswrapper[4745]: I0121 10:58:33.705407 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-696b8bdd7d-7slj5"]
Jan 21 10:58:33 crc kubenswrapper[4745]: I0121 10:58:33.705635 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-696b8bdd7d-7slj5" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api-log" containerID="cri-o://a574a75a2386cd799d6e1fa68c82341439d080b6605123d97919c6a1cd1339fc" gracePeriod=30
Jan 21 10:58:33 crc kubenswrapper[4745]: I0121 10:58:33.706053 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-696b8bdd7d-7slj5" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api" containerID="cri-o://4123a288d6520e779dfdf9cfa9f95036dcdab26e1b185730d80a066ea1be28c2" gracePeriod=30
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.251293 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.273859 4745 generic.go:334] "Generic (PLEG): container finished" podID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerID="a574a75a2386cd799d6e1fa68c82341439d080b6605123d97919c6a1cd1339fc" exitCode=143
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.274023 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-696b8bdd7d-7slj5" event={"ID":"1a7c6d2e-f298-4367-a7e8-3028f67b102c","Type":"ContainerDied","Data":"a574a75a2386cd799d6e1fa68c82341439d080b6605123d97919c6a1cd1339fc"}
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.280479 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"42879d72-308d-4dec-9961-82d3b55e429e","Type":"ContainerDied","Data":"c2f56f590bb5abeb6765add99ce78aafdd4ae03a09cecd9f41850cba6f42d85f"}
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.280723 4745 scope.go:117] "RemoveContainer" containerID="c4ef11755a9bcc3f3535db4ef8b01ca42bbf13481e9c40ed2996d6ecb92a60ea"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.280633 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.283777 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c","Type":"ContainerStarted","Data":"c43c4b3f23f5c45a5e67dc5b32eb2787da06d335107498ade7c70071ad8d3c70"}
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.332325 4745 scope.go:117] "RemoveContainer" containerID="0f40445d45e60c7bd6057f7953a23f67ee000357339fa0fd2ac433360ec42b00"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.354253 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-combined-ca-bundle\") pod \"42879d72-308d-4dec-9961-82d3b55e429e\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") "
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.354334 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7xxh\" (UniqueName: \"kubernetes.io/projected/42879d72-308d-4dec-9961-82d3b55e429e-kube-api-access-f7xxh\") pod \"42879d72-308d-4dec-9961-82d3b55e429e\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") "
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.354380 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data\") pod \"42879d72-308d-4dec-9961-82d3b55e429e\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") "
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.354432 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42879d72-308d-4dec-9961-82d3b55e429e-etc-machine-id\") pod \"42879d72-308d-4dec-9961-82d3b55e429e\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") "
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.354475 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-scripts\") pod \"42879d72-308d-4dec-9961-82d3b55e429e\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") "
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.354588 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data-custom\") pod \"42879d72-308d-4dec-9961-82d3b55e429e\" (UID: \"42879d72-308d-4dec-9961-82d3b55e429e\") "
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.354772 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42879d72-308d-4dec-9961-82d3b55e429e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "42879d72-308d-4dec-9961-82d3b55e429e" (UID: "42879d72-308d-4dec-9961-82d3b55e429e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.355058 4745 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/42879d72-308d-4dec-9961-82d3b55e429e-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.364987 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42879d72-308d-4dec-9961-82d3b55e429e-kube-api-access-f7xxh" (OuterVolumeSpecName: "kube-api-access-f7xxh") pod "42879d72-308d-4dec-9961-82d3b55e429e" (UID: "42879d72-308d-4dec-9961-82d3b55e429e"). InnerVolumeSpecName "kube-api-access-f7xxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.365459 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-scripts" (OuterVolumeSpecName: "scripts") pod "42879d72-308d-4dec-9961-82d3b55e429e" (UID: "42879d72-308d-4dec-9961-82d3b55e429e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.365885 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "42879d72-308d-4dec-9961-82d3b55e429e" (UID: "42879d72-308d-4dec-9961-82d3b55e429e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.462894 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-scripts\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.462930 4745 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.462942 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7xxh\" (UniqueName: \"kubernetes.io/projected/42879d72-308d-4dec-9961-82d3b55e429e-kube-api-access-f7xxh\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.551694 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "42879d72-308d-4dec-9961-82d3b55e429e" (UID: "42879d72-308d-4dec-9961-82d3b55e429e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.568876 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.635644 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data" (OuterVolumeSpecName: "config-data") pod "42879d72-308d-4dec-9961-82d3b55e429e" (UID: "42879d72-308d-4dec-9961-82d3b55e429e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.672126 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/42879d72-308d-4dec-9961-82d3b55e429e-config-data\") on node \"crc\" DevicePath \"\""
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.936040 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.970159 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.994330 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 10:58:34 crc kubenswrapper[4745]: E0121 10:58:34.994923 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c501f46e-be57-458d-bb01-a1db3aecbd93" containerName="dnsmasq-dns"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.994944 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c501f46e-be57-458d-bb01-a1db3aecbd93" containerName="dnsmasq-dns"
Jan 21 10:58:34 crc kubenswrapper[4745]: E0121 10:58:34.994965 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42879d72-308d-4dec-9961-82d3b55e429e" containerName="probe"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.994971 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="42879d72-308d-4dec-9961-82d3b55e429e" containerName="probe"
Jan 21 10:58:34 crc kubenswrapper[4745]: E0121 10:58:34.994985 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c501f46e-be57-458d-bb01-a1db3aecbd93" containerName="init"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.994993 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c501f46e-be57-458d-bb01-a1db3aecbd93" containerName="init"
Jan 21 10:58:34 crc kubenswrapper[4745]: E0121 10:58:34.995004 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42879d72-308d-4dec-9961-82d3b55e429e" containerName="cinder-scheduler"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.995010 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="42879d72-308d-4dec-9961-82d3b55e429e" containerName="cinder-scheduler"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.995215 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="42879d72-308d-4dec-9961-82d3b55e429e" containerName="cinder-scheduler"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.995233 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="42879d72-308d-4dec-9961-82d3b55e429e" containerName="probe"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.995243 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c501f46e-be57-458d-bb01-a1db3aecbd93" containerName="dnsmasq-dns"
Jan 21 10:58:34 crc kubenswrapper[4745]: I0121 10:58:34.996382 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.003398 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.037872 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.037960 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cdbfc4d4d-pm6ln"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.038815 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"f57ccedb86dad5657f9fdf7c445e2849aacbd47de26c247bb9bde68caa1753ec"} pod="openstack/horizon-5cdbfc4d4d-pm6ln" containerMessage="Container horizon failed startup probe, will be restarted"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.038851 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" containerID="cri-o://f57ccedb86dad5657f9fdf7c445e2849aacbd47de26c247bb9bde68caa1753ec" gracePeriod=30
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.082922 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.088730 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.089033 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.089179 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f4e5bfc-8f66-4654-a418-d08193e99884-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.089255 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-scripts\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.089365 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-config-data\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.089655 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tllvf\" (UniqueName: \"kubernetes.io/projected/6f4e5bfc-8f66-4654-a418-d08193e99884-kube-api-access-tllvf\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.204439 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f4e5bfc-8f66-4654-a418-d08193e99884-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.204523 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-scripts\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.204593 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-config-data\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.204747 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tllvf\" (UniqueName: \"kubernetes.io/projected/6f4e5bfc-8f66-4654-a418-d08193e99884-kube-api-access-tllvf\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.204891 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.204943 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.210679 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6f4e5bfc-8f66-4654-a418-d08193e99884-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.212755 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.215372 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-scripts\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.232119 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-config-data\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.234163 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tllvf\" (UniqueName: \"kubernetes.io/projected/6f4e5bfc-8f66-4654-a418-d08193e99884-kube-api-access-tllvf\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.236988 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6f4e5bfc-8f66-4654-a418-d08193e99884-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6f4e5bfc-8f66-4654-a418-d08193e99884\") " pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.339411 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 21 10:58:35 crc kubenswrapper[4745]: I0121 10:58:35.892395 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 21 10:58:36 crc kubenswrapper[4745]: I0121 10:58:36.056461 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42879d72-308d-4dec-9961-82d3b55e429e" path="/var/lib/kubelet/pods/42879d72-308d-4dec-9961-82d3b55e429e/volumes"
Jan 21 10:58:36 crc kubenswrapper[4745]: I0121 10:58:36.312272 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f4e5bfc-8f66-4654-a418-d08193e99884","Type":"ContainerStarted","Data":"edbef28321ea671ccfbeb3e487d2c9fd83e4ecb26614feb3a57a2ea22aa1f646"}
Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.163654 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-696b8bdd7d-7slj5" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:39090->10.217.0.165:9311: read: connection reset by peer"
Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.163723 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-696b8bdd7d-7slj5" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.165:9311/healthcheck\": read tcp 10.217.0.2:39082->10.217.0.165:9311: read: connection reset by peer"
Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.338327 4745 generic.go:334] "Generic (PLEG): container finished" podID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerID="4123a288d6520e779dfdf9cfa9f95036dcdab26e1b185730d80a066ea1be28c2" exitCode=0
Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.338377 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-696b8bdd7d-7slj5" event={"ID":"1a7c6d2e-f298-4367-a7e8-3028f67b102c","Type":"ContainerDied","Data":"4123a288d6520e779dfdf9cfa9f95036dcdab26e1b185730d80a066ea1be28c2"}
Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.755393 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-696b8bdd7d-7slj5"
Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.864177 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data-custom\") pod \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") "
Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.865748 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-combined-ca-bundle\") pod \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") "
Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.865813 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7lxm\" (UniqueName: \"kubernetes.io/projected/1a7c6d2e-f298-4367-a7e8-3028f67b102c-kube-api-access-l7lxm\") pod \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") "
Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.865968 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data\") pod \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.866100 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a7c6d2e-f298-4367-a7e8-3028f67b102c-logs\") pod \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\" (UID: \"1a7c6d2e-f298-4367-a7e8-3028f67b102c\") " Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.866409 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a7c6d2e-f298-4367-a7e8-3028f67b102c-logs" (OuterVolumeSpecName: "logs") pod "1a7c6d2e-f298-4367-a7e8-3028f67b102c" (UID: "1a7c6d2e-f298-4367-a7e8-3028f67b102c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:58:37 crc kubenswrapper[4745]: I0121 10:58:37.866809 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1a7c6d2e-f298-4367-a7e8-3028f67b102c-logs\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.276829 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a7c6d2e-f298-4367-a7e8-3028f67b102c-kube-api-access-l7lxm" (OuterVolumeSpecName: "kube-api-access-l7lxm") pod "1a7c6d2e-f298-4367-a7e8-3028f67b102c" (UID: "1a7c6d2e-f298-4367-a7e8-3028f67b102c"). InnerVolumeSpecName "kube-api-access-l7lxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.284925 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7lxm\" (UniqueName: \"kubernetes.io/projected/1a7c6d2e-f298-4367-a7e8-3028f67b102c-kube-api-access-l7lxm\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.300469 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1a7c6d2e-f298-4367-a7e8-3028f67b102c" (UID: "1a7c6d2e-f298-4367-a7e8-3028f67b102c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.334447 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="c7a564b0-2da4-4d9c-a8a2-e61604758a1f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.171:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.390362 4745 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.404429 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f4e5bfc-8f66-4654-a418-d08193e99884","Type":"ContainerStarted","Data":"e1c1d3f3ba8f3106719cac5f977b7cca736f041b2b699a207eae3ff1c68c6e7c"} Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.408826 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "1a7c6d2e-f298-4367-a7e8-3028f67b102c" (UID: "1a7c6d2e-f298-4367-a7e8-3028f67b102c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.419097 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-696b8bdd7d-7slj5" event={"ID":"1a7c6d2e-f298-4367-a7e8-3028f67b102c","Type":"ContainerDied","Data":"bc4a68f4044ff9ffd7acc4a74158267e35ed21be49fcfb00571708bea578d63e"} Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.419176 4745 scope.go:117] "RemoveContainer" containerID="4123a288d6520e779dfdf9cfa9f95036dcdab26e1b185730d80a066ea1be28c2" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.419400 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-696b8bdd7d-7slj5" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.484675 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data" (OuterVolumeSpecName: "config-data") pod "1a7c6d2e-f298-4367-a7e8-3028f67b102c" (UID: "1a7c6d2e-f298-4367-a7e8-3028f67b102c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.491665 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.491710 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1a7c6d2e-f298-4367-a7e8-3028f67b102c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.495692 4745 scope.go:117] "RemoveContainer" containerID="a574a75a2386cd799d6e1fa68c82341439d080b6605123d97919c6a1cd1339fc" Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.759282 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-696b8bdd7d-7slj5"] Jan 21 10:58:38 crc kubenswrapper[4745]: I0121 10:58:38.772552 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-696b8bdd7d-7slj5"] Jan 21 10:58:39 crc kubenswrapper[4745]: I0121 10:58:39.337840 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="c7a564b0-2da4-4d9c-a8a2-e61604758a1f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.171:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 10:58:39 crc kubenswrapper[4745]: I0121 10:58:39.869772 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:39 crc kubenswrapper[4745]: I0121 10:58:39.966575 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-9b7b6cc58-8rqwl" Jan 21 10:58:40 crc kubenswrapper[4745]: I0121 10:58:40.013748 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" path="/var/lib/kubelet/pods/1a7c6d2e-f298-4367-a7e8-3028f67b102c/volumes" Jan 21 10:58:40 crc kubenswrapper[4745]: I0121 10:58:40.475914 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c","Type":"ContainerStarted","Data":"1826683d4247cd6812f5e4e79099848663df91834e590735c3262e4fee85e5a2"} Jan 21 10:58:41 crc kubenswrapper[4745]: I0121 10:58:41.510025 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6f4e5bfc-8f66-4654-a418-d08193e99884","Type":"ContainerStarted","Data":"e8eb4a23f640fbdba82cf39d4c6b712416b0d4b12bb8a5bc8b810f10166c8f62"} Jan 21 10:58:41 crc kubenswrapper[4745]: I0121 10:58:41.548359 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.5483284 podStartE2EDuration="7.5483284s" podCreationTimestamp="2026-01-21 10:58:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:58:41.539709509 +0000 UTC m=+1306.000497107" watchObservedRunningTime="2026-01-21 10:58:41.5483284 +0000 UTC m=+1306.009115998" Jan 21 10:58:41 crc kubenswrapper[4745]: I0121 10:58:41.733119 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-59f65b95fd-mfxld" Jan 21 10:58:42 crc kubenswrapper[4745]: I0121 10:58:42.159766 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.340177 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.646477 4745 generic.go:334] "Generic (PLEG): container finished" podID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" 
containerID="f57ccedb86dad5657f9fdf7c445e2849aacbd47de26c247bb9bde68caa1753ec" exitCode=0 Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.646579 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cdbfc4d4d-pm6ln" event={"ID":"1b30531d-e957-4efd-b09c-d5d0b5fd1382","Type":"ContainerDied","Data":"f57ccedb86dad5657f9fdf7c445e2849aacbd47de26c247bb9bde68caa1753ec"} Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.649508 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c","Type":"ContainerStarted","Data":"ad89d1e76ebaf9f363bfbf7fccf0b8bf29aef96a45395bd996ddf0c0e3afee97"} Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.653570 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.897725 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 21 10:58:45 crc kubenswrapper[4745]: E0121 10:58:45.898410 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api-log" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.898484 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api-log" Jan 21 10:58:45 crc kubenswrapper[4745]: E0121 10:58:45.898674 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.898743 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.899019 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" 
containerName="barbican-api-log" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.899136 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a7c6d2e-f298-4367-a7e8-3028f67b102c" containerName="barbican-api" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.899806 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.902047 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.902336 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.902695 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-km27f" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.909969 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.954568 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a6aca1df-e09e-42d8-8046-be985160f75a-openstack-config-secret\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.954626 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a6aca1df-e09e-42d8-8046-be985160f75a-openstack-config\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.954676 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx9vt\" (UniqueName: \"kubernetes.io/projected/a6aca1df-e09e-42d8-8046-be985160f75a-kube-api-access-wx9vt\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:45 crc kubenswrapper[4745]: I0121 10:58:45.954712 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6aca1df-e09e-42d8-8046-be985160f75a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.055995 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx9vt\" (UniqueName: \"kubernetes.io/projected/a6aca1df-e09e-42d8-8046-be985160f75a-kube-api-access-wx9vt\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.056372 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6aca1df-e09e-42d8-8046-be985160f75a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.056649 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a6aca1df-e09e-42d8-8046-be985160f75a-openstack-config-secret\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.056797 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/a6aca1df-e09e-42d8-8046-be985160f75a-openstack-config\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.057957 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a6aca1df-e09e-42d8-8046-be985160f75a-openstack-config\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.067004 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a6aca1df-e09e-42d8-8046-be985160f75a-openstack-config-secret\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.068733 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6aca1df-e09e-42d8-8046-be985160f75a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.086327 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx9vt\" (UniqueName: \"kubernetes.io/projected/a6aca1df-e09e-42d8-8046-be985160f75a-kube-api-access-wx9vt\") pod \"openstackclient\" (UID: \"a6aca1df-e09e-42d8-8046-be985160f75a\") " pod="openstack/openstackclient" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.223176 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.660071 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.712512 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=5.784363015 podStartE2EDuration="23.712479543s" podCreationTimestamp="2026-01-21 10:58:23 +0000 UTC" firstStartedPulling="2026-01-21 10:58:24.680044434 +0000 UTC m=+1289.140832032" lastFinishedPulling="2026-01-21 10:58:42.608160962 +0000 UTC m=+1307.068948560" observedRunningTime="2026-01-21 10:58:46.703943164 +0000 UTC m=+1311.164730762" watchObservedRunningTime="2026-01-21 10:58:46.712479543 +0000 UTC m=+1311.173267142" Jan 21 10:58:46 crc kubenswrapper[4745]: I0121 10:58:46.733733 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 21 10:58:47 crc kubenswrapper[4745]: I0121 10:58:47.668896 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a6aca1df-e09e-42d8-8046-be985160f75a","Type":"ContainerStarted","Data":"024cc82e33face25200c94d9bca8bc82e1afe918e2bcffa2c6a26596db960b0b"} Jan 21 10:58:47 crc kubenswrapper[4745]: I0121 10:58:47.671170 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cdbfc4d4d-pm6ln" event={"ID":"1b30531d-e957-4efd-b09c-d5d0b5fd1382","Type":"ContainerStarted","Data":"379551ea665f8240a2a6912e8cabdcc3ee0f825c366fa7f7368ad2258467570f"} Jan 21 10:58:47 crc kubenswrapper[4745]: I0121 10:58:47.718580 4745 scope.go:117] "RemoveContainer" containerID="91a62e640d26858d3099f470f15ebcb72c7ca298bc08a5860d05bd6ea0b5cd4d" Jan 21 10:58:48 crc kubenswrapper[4745]: I0121 10:58:48.133859 4745 scope.go:117] "RemoveContainer" containerID="479246080d4424bf86032d1faf8d9a989334caf0fdb50c146baebeee93660dfd" Jan 21 10:58:48 crc kubenswrapper[4745]: I0121 
10:58:48.259179 4745 scope.go:117] "RemoveContainer" containerID="d1ae8af1ef8d13f7c7dc2041ac7f7b805d4869bb8f5aff815c803c24757321f3" Jan 21 10:58:50 crc kubenswrapper[4745]: I0121 10:58:50.030883 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:58:50 crc kubenswrapper[4745]: I0121 10:58:50.031404 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:58:50 crc kubenswrapper[4745]: I0121 10:58:50.713872 4745 generic.go:334] "Generic (PLEG): container finished" podID="8d2746d8-86a1-412c-8cac-b737fff90886" containerID="db044202ae0063faeb02cf75ac50f68010a4372bb2bd84a035565822361bf906" exitCode=137 Jan 21 10:58:50 crc kubenswrapper[4745]: I0121 10:58:50.713921 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb545d88-xv4bf" event={"ID":"8d2746d8-86a1-412c-8cac-b737fff90886","Type":"ContainerDied","Data":"db044202ae0063faeb02cf75ac50f68010a4372bb2bd84a035565822361bf906"} Jan 21 10:58:52 crc kubenswrapper[4745]: I0121 10:58:52.736358 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb545d88-xv4bf" event={"ID":"8d2746d8-86a1-412c-8cac-b737fff90886","Type":"ContainerStarted","Data":"3643118f481e7226b702137d2af839c8cf6efc660091c1400f2eeeabfda81e6f"} Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.360231 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-84f7d6cccf-pmbj6"] Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.362057 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.364936 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.365039 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.365091 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.368120 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-config-data\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.368198 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb6ab3e8-65c0-4076-8633-485e6f678171-etc-swift\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.368223 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pft67\" (UniqueName: \"kubernetes.io/projected/eb6ab3e8-65c0-4076-8633-485e6f678171-kube-api-access-pft67\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.368250 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-public-tls-certs\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.368313 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6ab3e8-65c0-4076-8633-485e6f678171-log-httpd\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.368372 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-internal-tls-certs\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.368394 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-combined-ca-bundle\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.368499 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6ab3e8-65c0-4076-8633-485e6f678171-run-httpd\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.398130 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84f7d6cccf-pmbj6"] 
Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.469941 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-public-tls-certs\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.470324 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6ab3e8-65c0-4076-8633-485e6f678171-log-httpd\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.470411 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-internal-tls-certs\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.470439 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-combined-ca-bundle\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.470465 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6ab3e8-65c0-4076-8633-485e6f678171-run-httpd\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.470560 
4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-config-data\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.470588 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/eb6ab3e8-65c0-4076-8633-485e6f678171-etc-swift\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.470616 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pft67\" (UniqueName: \"kubernetes.io/projected/eb6ab3e8-65c0-4076-8633-485e6f678171-kube-api-access-pft67\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.471063 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6ab3e8-65c0-4076-8633-485e6f678171-log-httpd\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.473008 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/eb6ab3e8-65c0-4076-8633-485e6f678171-run-httpd\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.520840 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-internal-tls-certs\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.520879 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-config-data\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.520848 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pft67\" (UniqueName: \"kubernetes.io/projected/eb6ab3e8-65c0-4076-8633-485e6f678171-kube-api-access-pft67\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.521346 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-public-tls-certs\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.534912 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6ab3e8-65c0-4076-8633-485e6f678171-combined-ca-bundle\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.551053 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/eb6ab3e8-65c0-4076-8633-485e6f678171-etc-swift\") pod \"swift-proxy-84f7d6cccf-pmbj6\" (UID: \"eb6ab3e8-65c0-4076-8633-485e6f678171\") " pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:56 crc kubenswrapper[4745]: I0121 10:58:56.696779 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:58:59 crc kubenswrapper[4745]: I0121 10:58:59.710767 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:58:59 crc kubenswrapper[4745]: I0121 10:58:59.711369 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:58:59 crc kubenswrapper[4745]: I0121 10:58:59.763687 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:58:59 crc kubenswrapper[4745]: I0121 10:58:59.764255 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="ceilometer-central-agent" containerID="cri-o://8573d22bc4e2872ddf249e7dee6cbb87fd3ae6055786237637718874b23e0ce4" gracePeriod=30 Jan 21 10:58:59 crc kubenswrapper[4745]: I0121 10:58:59.764812 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="proxy-httpd" containerID="cri-o://ad89d1e76ebaf9f363bfbf7fccf0b8bf29aef96a45395bd996ddf0c0e3afee97" gracePeriod=30 Jan 21 10:58:59 crc kubenswrapper[4745]: I0121 10:58:59.764911 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="ceilometer-notification-agent" containerID="cri-o://c43c4b3f23f5c45a5e67dc5b32eb2787da06d335107498ade7c70071ad8d3c70" gracePeriod=30 Jan 21 10:58:59 crc kubenswrapper[4745]: I0121 
10:58:59.764956 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="sg-core" containerID="cri-o://1826683d4247cd6812f5e4e79099848663df91834e590735c3262e4fee85e5a2" gracePeriod=30 Jan 21 10:58:59 crc kubenswrapper[4745]: I0121 10:58:59.778546 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 21 10:59:00 crc kubenswrapper[4745]: I0121 10:59:00.051762 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 21 10:59:00 crc kubenswrapper[4745]: W0121 10:59:00.462585 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb6ab3e8_65c0_4076_8633_485e6f678171.slice/crio-fa7bbcb2c279dad31842f299c07ebbfa4c86f6d01f3d98d815990b41d5055551 WatchSource:0}: Error finding container fa7bbcb2c279dad31842f299c07ebbfa4c86f6d01f3d98d815990b41d5055551: Status 404 returned error can't find the container with id fa7bbcb2c279dad31842f299c07ebbfa4c86f6d01f3d98d815990b41d5055551 Jan 21 10:59:00 crc kubenswrapper[4745]: I0121 10:59:00.470345 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-84f7d6cccf-pmbj6"] Jan 21 10:59:00 crc kubenswrapper[4745]: E0121 10:59:00.831292 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 21 10:59:00 crc kubenswrapper[4745]: 
E0121 10:59:00.831454 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd9h548h7h655h68fh5f9h56fh685h5fdh59fh8ch647h55ch67ch67bh67hd4h679hd6h675h9dh5d8h5b7h77h75h6dh65bhcch67bhf8h9bhccq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx9vt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(a6aca1df-e09e-42d8-8046-be985160f75a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 10:59:00 crc kubenswrapper[4745]: E0121 10:59:00.833768 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="a6aca1df-e09e-42d8-8046-be985160f75a" Jan 21 10:59:00 crc kubenswrapper[4745]: I0121 10:59:00.835046 4745 generic.go:334] "Generic (PLEG): container finished" podID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerID="ad89d1e76ebaf9f363bfbf7fccf0b8bf29aef96a45395bd996ddf0c0e3afee97" exitCode=0 Jan 21 10:59:00 crc kubenswrapper[4745]: I0121 10:59:00.835072 4745 generic.go:334] "Generic (PLEG): container finished" podID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerID="1826683d4247cd6812f5e4e79099848663df91834e590735c3262e4fee85e5a2" exitCode=2 Jan 21 10:59:00 crc kubenswrapper[4745]: I0121 10:59:00.835082 4745 generic.go:334] "Generic (PLEG): container finished" podID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerID="8573d22bc4e2872ddf249e7dee6cbb87fd3ae6055786237637718874b23e0ce4" exitCode=0 Jan 21 10:59:00 crc kubenswrapper[4745]: I0121 10:59:00.835143 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c","Type":"ContainerDied","Data":"ad89d1e76ebaf9f363bfbf7fccf0b8bf29aef96a45395bd996ddf0c0e3afee97"} Jan 21 10:59:00 crc kubenswrapper[4745]: I0121 10:59:00.835223 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c","Type":"ContainerDied","Data":"1826683d4247cd6812f5e4e79099848663df91834e590735c3262e4fee85e5a2"} Jan 21 10:59:00 crc kubenswrapper[4745]: I0121 10:59:00.835236 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c","Type":"ContainerDied","Data":"8573d22bc4e2872ddf249e7dee6cbb87fd3ae6055786237637718874b23e0ce4"} Jan 21 10:59:00 crc kubenswrapper[4745]: I0121 10:59:00.843539 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" event={"ID":"eb6ab3e8-65c0-4076-8633-485e6f678171","Type":"ContainerStarted","Data":"fa7bbcb2c279dad31842f299c07ebbfa4c86f6d01f3d98d815990b41d5055551"} Jan 21 10:59:01 crc kubenswrapper[4745]: I0121 10:59:01.854164 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" event={"ID":"eb6ab3e8-65c0-4076-8633-485e6f678171","Type":"ContainerStarted","Data":"b6acf88ca4c98adc9a7afffa0f79692832ae594cff367131e524762252367460"} Jan 21 10:59:01 crc kubenswrapper[4745]: E0121 10:59:01.855510 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="a6aca1df-e09e-42d8-8046-be985160f75a" Jan 21 10:59:03 crc kubenswrapper[4745]: I0121 10:59:03.878130 4745 generic.go:334] "Generic (PLEG): container finished" podID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" 
containerID="c43c4b3f23f5c45a5e67dc5b32eb2787da06d335107498ade7c70071ad8d3c70" exitCode=0 Jan 21 10:59:03 crc kubenswrapper[4745]: I0121 10:59:03.878181 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c","Type":"ContainerDied","Data":"c43c4b3f23f5c45a5e67dc5b32eb2787da06d335107498ade7c70071ad8d3c70"} Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.672248 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.847488 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-sg-core-conf-yaml\") pod \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.847584 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-config-data\") pod \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.847637 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-run-httpd\") pod \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.847661 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-log-httpd\") pod \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 
10:59:04.847779 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-scripts\") pod \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.847821 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcmhr\" (UniqueName: \"kubernetes.io/projected/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-kube-api-access-jcmhr\") pod \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.847892 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-combined-ca-bundle\") pod \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\" (UID: \"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c\") " Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.851714 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" (UID: "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.852287 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" (UID: "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.876909 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-kube-api-access-jcmhr" (OuterVolumeSpecName: "kube-api-access-jcmhr") pod "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" (UID: "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c"). InnerVolumeSpecName "kube-api-access-jcmhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.899781 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-scripts" (OuterVolumeSpecName: "scripts") pod "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" (UID: "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.908183 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" event={"ID":"eb6ab3e8-65c0-4076-8633-485e6f678171","Type":"ContainerStarted","Data":"997a821a5a262b9a41bf33bf7d9c2fde915f65432a89771e7010c475d126ec03"} Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.909797 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.909835 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.942635 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" (UID: "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.943479 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f12a2e8c-926a-4eb0-b638-3a3fc07ff21c","Type":"ContainerDied","Data":"c1c003ace73ea751aefa1ddfee637b69ad096c2aada9e639051ae41aee917f1c"} Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.943560 4745 scope.go:117] "RemoveContainer" containerID="ad89d1e76ebaf9f363bfbf7fccf0b8bf29aef96a45395bd996ddf0c0e3afee97" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.943768 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.954770 4745 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.954968 4745 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.955054 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.955117 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcmhr\" (UniqueName: \"kubernetes.io/projected/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-kube-api-access-jcmhr\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.955198 4745 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-sg-core-conf-yaml\") on node 
\"crc\" DevicePath \"\"" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.964704 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" podStartSLOduration=8.964683307 podStartE2EDuration="8.964683307s" podCreationTimestamp="2026-01-21 10:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:04.943115886 +0000 UTC m=+1329.403903484" watchObservedRunningTime="2026-01-21 10:59:04.964683307 +0000 UTC m=+1329.425470905" Jan 21 10:59:04 crc kubenswrapper[4745]: I0121 10:59:04.987566 4745 scope.go:117] "RemoveContainer" containerID="1826683d4247cd6812f5e4e79099848663df91834e590735c3262e4fee85e5a2" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.019718 4745 scope.go:117] "RemoveContainer" containerID="c43c4b3f23f5c45a5e67dc5b32eb2787da06d335107498ade7c70071ad8d3c70" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.056476 4745 scope.go:117] "RemoveContainer" containerID="8573d22bc4e2872ddf249e7dee6cbb87fd3ae6055786237637718874b23e0ce4" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.079357 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" (UID: "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.122727 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-config-data" (OuterVolumeSpecName: "config-data") pod "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" (UID: "f12a2e8c-926a-4eb0-b638-3a3fc07ff21c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.159280 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.159564 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.278049 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.290756 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.319086 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:05 crc kubenswrapper[4745]: E0121 10:59:05.319439 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="proxy-httpd" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.319455 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="proxy-httpd" Jan 21 10:59:05 crc kubenswrapper[4745]: E0121 10:59:05.319473 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="sg-core" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.319480 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="sg-core" Jan 21 10:59:05 crc kubenswrapper[4745]: E0121 10:59:05.319495 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" 
containerName="ceilometer-central-agent" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.319501 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="ceilometer-central-agent" Jan 21 10:59:05 crc kubenswrapper[4745]: E0121 10:59:05.319517 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="ceilometer-notification-agent" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.319523 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="ceilometer-notification-agent" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.319695 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="sg-core" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.319711 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="ceilometer-central-agent" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.319718 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="proxy-httpd" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.319727 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" containerName="ceilometer-notification-agent" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.321215 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.330786 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.330863 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.335317 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.464298 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-scripts\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.464731 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.464789 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-config-data\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.464867 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-run-httpd\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " 
pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.464890 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.464923 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-log-httpd\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.464954 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgl2\" (UniqueName: \"kubernetes.io/projected/db33933a-aeb2-443f-a4d1-e8b514bf57fb-kube-api-access-dzgl2\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.566065 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-scripts\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.566138 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.566181 4745 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-config-data\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.566222 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-run-httpd\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.566239 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.566254 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-log-httpd\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.566279 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzgl2\" (UniqueName: \"kubernetes.io/projected/db33933a-aeb2-443f-a4d1-e8b514bf57fb-kube-api-access-dzgl2\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.567097 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-run-httpd\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc 
kubenswrapper[4745]: I0121 10:59:05.567395 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-log-httpd\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.571319 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-scripts\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.571717 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-config-data\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.575591 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.578947 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.608277 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzgl2\" (UniqueName: \"kubernetes.io/projected/db33933a-aeb2-443f-a4d1-e8b514bf57fb-kube-api-access-dzgl2\") pod \"ceilometer-0\" (UID: 
\"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.642206 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.875383 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-58b4779467-f9wqf"] Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.876943 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.882736 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.895003 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.895186 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-slsds" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.960160 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-58b4779467-f9wqf"] Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.979645 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data-custom\") pod \"heat-engine-58b4779467-f9wqf\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.979743 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data\") pod \"heat-engine-58b4779467-f9wqf\" (UID: 
\"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.979812 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-combined-ca-bundle\") pod \"heat-engine-58b4779467-f9wqf\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:05 crc kubenswrapper[4745]: I0121 10:59:05.979859 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74pz6\" (UniqueName: \"kubernetes.io/projected/bf1d009d-bd84-435d-aeb4-8bf435eeea50-kube-api-access-74pz6\") pod \"heat-engine-58b4779467-f9wqf\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.048159 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f12a2e8c-926a-4eb0-b638-3a3fc07ff21c" path="/var/lib/kubelet/pods/f12a2e8c-926a-4eb0-b638-3a3fc07ff21c/volumes" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.076689 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-lf2zv"] Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.078123 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.101229 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-config\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.101280 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data\") pod \"heat-engine-58b4779467-f9wqf\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.101300 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.101317 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.101347 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68b5t\" (UniqueName: \"kubernetes.io/projected/e3c396b1-66bf-4ba4-a9ac-09682839253d-kube-api-access-68b5t\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" 
(UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.101385 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.101404 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-combined-ca-bundle\") pod \"heat-engine-58b4779467-f9wqf\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.101441 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74pz6\" (UniqueName: \"kubernetes.io/projected/bf1d009d-bd84-435d-aeb4-8bf435eeea50-kube-api-access-74pz6\") pod \"heat-engine-58b4779467-f9wqf\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.101495 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.101576 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data-custom\") pod \"heat-engine-58b4779467-f9wqf\" (UID: 
\"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.106138 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-lf2zv"] Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.117361 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data\") pod \"heat-engine-58b4779467-f9wqf\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.117982 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data-custom\") pod \"heat-engine-58b4779467-f9wqf\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.126643 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-combined-ca-bundle\") pod \"heat-engine-58b4779467-f9wqf\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.141722 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74pz6\" (UniqueName: \"kubernetes.io/projected/bf1d009d-bd84-435d-aeb4-8bf435eeea50-kube-api-access-74pz6\") pod \"heat-engine-58b4779467-f9wqf\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.204583 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.204642 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.204678 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68b5t\" (UniqueName: \"kubernetes.io/projected/e3c396b1-66bf-4ba4-a9ac-09682839253d-kube-api-access-68b5t\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.205743 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.205942 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.205962 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.206178 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-config\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.206769 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.206948 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-config\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.208227 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.209712 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" 
(UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.219234 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" podUID="eb6ab3e8-65c0-4076-8633-485e6f678171" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.237336 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68b5t\" (UniqueName: \"kubernetes.io/projected/e3c396b1-66bf-4ba4-a9ac-09682839253d-kube-api-access-68b5t\") pod \"dnsmasq-dns-7756b9d78c-lf2zv\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.252831 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.525634 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.684316 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.715623 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-586848db89-qxdqf"] Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.716793 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.737511 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-857f5f7474-w59t2"] Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.739354 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.756595 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdgt2\" (UniqueName: \"kubernetes.io/projected/b4549eb9-0d8a-4d5a-9375-519f740f36ed-kube-api-access-pdgt2\") pod \"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.756982 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-combined-ca-bundle\") pod \"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.757083 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data\") pod \"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.757125 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data-custom\") pod \"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " 
pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.758250 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.776976 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.785405 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-586848db89-qxdqf"] Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.822758 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-857f5f7474-w59t2"] Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.839131 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" podUID="eb6ab3e8-65c0-4076-8633-485e6f678171" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.858471 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdgt2\" (UniqueName: \"kubernetes.io/projected/b4549eb9-0d8a-4d5a-9375-519f740f36ed-kube-api-access-pdgt2\") pod \"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.858561 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-combined-ca-bundle\") pod \"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.858612 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data\") pod \"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.858643 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data-custom\") pod \"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.868459 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-combined-ca-bundle\") pod \"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.870682 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data-custom\") pod \"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.871514 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data\") pod \"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.878792 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdgt2\" (UniqueName: \"kubernetes.io/projected/b4549eb9-0d8a-4d5a-9375-519f740f36ed-kube-api-access-pdgt2\") pod 
\"heat-api-586848db89-qxdqf\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.959381 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-58b4779467-f9wqf"] Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.963810 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.969783 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-combined-ca-bundle\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.970408 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dl2z\" (UniqueName: \"kubernetes.io/projected/5db7838a-7e98-4345-b958-56cfae3c59e7-kube-api-access-5dl2z\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:06 crc kubenswrapper[4745]: I0121 10:59:06.970870 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data-custom\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.040790 
4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db33933a-aeb2-443f-a4d1-e8b514bf57fb","Type":"ContainerStarted","Data":"3ef6968f9a59306c5b27f87fdbbb66908e688b8b486f9fba8dd807188abe0b06"} Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.043216 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-58b4779467-f9wqf" event={"ID":"bf1d009d-bd84-435d-aeb4-8bf435eeea50","Type":"ContainerStarted","Data":"97c54e8780fd5d0b5ce873cfda79cd1eccbf67f168d974e81ff108c5138578e2"} Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.054276 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" podUID="eb6ab3e8-65c0-4076-8633-485e6f678171" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.073498 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.075013 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-combined-ca-bundle\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.075581 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dl2z\" (UniqueName: \"kubernetes.io/projected/5db7838a-7e98-4345-b958-56cfae3c59e7-kube-api-access-5dl2z\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " 
pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.076558 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data-custom\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.116202 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.117386 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data-custom\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.117648 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-combined-ca-bundle\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.120605 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dl2z\" (UniqueName: \"kubernetes.io/projected/5db7838a-7e98-4345-b958-56cfae3c59e7-kube-api-access-5dl2z\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.121048 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data\") pod \"heat-cfnapi-857f5f7474-w59t2\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.140882 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.294881 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-lf2zv"] Jan 21 10:59:07 crc kubenswrapper[4745]: I0121 10:59:07.894028 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-857f5f7474-w59t2"] Jan 21 10:59:08 crc kubenswrapper[4745]: I0121 10:59:08.039712 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-586848db89-qxdqf"] Jan 21 10:59:08 crc kubenswrapper[4745]: I0121 10:59:08.076152 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-586848db89-qxdqf" event={"ID":"b4549eb9-0d8a-4d5a-9375-519f740f36ed","Type":"ContainerStarted","Data":"773cf1984bdc3228387794739863e8eefe1b8c419d0f2418207fca6447547496"} Jan 21 10:59:08 crc kubenswrapper[4745]: I0121 10:59:08.080626 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-857f5f7474-w59t2" event={"ID":"5db7838a-7e98-4345-b958-56cfae3c59e7","Type":"ContainerStarted","Data":"b7886ce9f5bad90d7390cf796e9dc8859573fce96582635a6e49f3abba9223d2"} Jan 21 10:59:08 crc kubenswrapper[4745]: I0121 10:59:08.087969 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" event={"ID":"e3c396b1-66bf-4ba4-a9ac-09682839253d","Type":"ContainerStarted","Data":"049dbaf16ce9d5abe72313c217702d27c35494f9a09200c33599820b8794d98a"} Jan 21 10:59:08 crc kubenswrapper[4745]: I0121 10:59:08.088025 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" 
event={"ID":"e3c396b1-66bf-4ba4-a9ac-09682839253d","Type":"ContainerStarted","Data":"290de00e3da7e5ffdae163325ae16659a45d1edced39895dd9419f48c9cc2ea1"} Jan 21 10:59:08 crc kubenswrapper[4745]: I0121 10:59:08.099794 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-58b4779467-f9wqf" event={"ID":"bf1d009d-bd84-435d-aeb4-8bf435eeea50","Type":"ContainerStarted","Data":"e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee"} Jan 21 10:59:08 crc kubenswrapper[4745]: I0121 10:59:08.100681 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:08 crc kubenswrapper[4745]: I0121 10:59:08.129251 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-58b4779467-f9wqf" podStartSLOduration=3.129235557 podStartE2EDuration="3.129235557s" podCreationTimestamp="2026-01-21 10:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:08.128681873 +0000 UTC m=+1332.589469471" watchObservedRunningTime="2026-01-21 10:59:08.129235557 +0000 UTC m=+1332.590023155" Jan 21 10:59:09 crc kubenswrapper[4745]: I0121 10:59:09.117021 4745 generic.go:334] "Generic (PLEG): container finished" podID="e3c396b1-66bf-4ba4-a9ac-09682839253d" containerID="049dbaf16ce9d5abe72313c217702d27c35494f9a09200c33599820b8794d98a" exitCode=0 Jan 21 10:59:09 crc kubenswrapper[4745]: I0121 10:59:09.117384 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" event={"ID":"e3c396b1-66bf-4ba4-a9ac-09682839253d","Type":"ContainerDied","Data":"049dbaf16ce9d5abe72313c217702d27c35494f9a09200c33599820b8794d98a"} Jan 21 10:59:09 crc kubenswrapper[4745]: I0121 10:59:09.127512 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"db33933a-aeb2-443f-a4d1-e8b514bf57fb","Type":"ContainerStarted","Data":"c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58"} Jan 21 10:59:09 crc kubenswrapper[4745]: I0121 10:59:09.713670 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 21 10:59:10 crc kubenswrapper[4745]: I0121 10:59:10.031747 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 21 10:59:10 crc kubenswrapper[4745]: I0121 10:59:10.158465 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" event={"ID":"e3c396b1-66bf-4ba4-a9ac-09682839253d","Type":"ContainerStarted","Data":"700596e49bad937b21e0b51081168e64e99fd5d4dd81f900c89cf80b9cbc9a60"} Jan 21 10:59:10 crc kubenswrapper[4745]: I0121 10:59:10.158909 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:10 crc kubenswrapper[4745]: I0121 10:59:10.213819 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" podStartSLOduration=4.213803145 podStartE2EDuration="4.213803145s" podCreationTimestamp="2026-01-21 10:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:10.213405664 +0000 UTC m=+1334.674193262" watchObservedRunningTime="2026-01-21 10:59:10.213803145 +0000 UTC m=+1334.674590743" Jan 
21 10:59:11 crc kubenswrapper[4745]: I0121 10:59:11.344313 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 10:59:11 crc kubenswrapper[4745]: I0121 10:59:11.709327 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:59:11 crc kubenswrapper[4745]: I0121 10:59:11.715991 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-84f7d6cccf-pmbj6" Jan 21 10:59:12 crc kubenswrapper[4745]: I0121 10:59:12.192858 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db33933a-aeb2-443f-a4d1-e8b514bf57fb","Type":"ContainerStarted","Data":"c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51"} Jan 21 10:59:14 crc kubenswrapper[4745]: I0121 10:59:14.215144 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db33933a-aeb2-443f-a4d1-e8b514bf57fb","Type":"ContainerStarted","Data":"11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64"} Jan 21 10:59:14 crc kubenswrapper[4745]: I0121 10:59:14.216919 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-586848db89-qxdqf" event={"ID":"b4549eb9-0d8a-4d5a-9375-519f740f36ed","Type":"ContainerStarted","Data":"fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2"} Jan 21 10:59:14 crc kubenswrapper[4745]: I0121 10:59:14.217146 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:14 crc kubenswrapper[4745]: I0121 10:59:14.218119 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-857f5f7474-w59t2" event={"ID":"5db7838a-7e98-4345-b958-56cfae3c59e7","Type":"ContainerStarted","Data":"e9dbd15b7055d340939a07062c7769bc063e9533659fc5f255ec4576988e4839"} Jan 21 10:59:14 crc kubenswrapper[4745]: I0121 10:59:14.218295 4745 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:14 crc kubenswrapper[4745]: I0121 10:59:14.271782 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-586848db89-qxdqf" podStartSLOduration=3.23231956 podStartE2EDuration="8.27176433s" podCreationTimestamp="2026-01-21 10:59:06 +0000 UTC" firstStartedPulling="2026-01-21 10:59:08.0490409 +0000 UTC m=+1332.509828498" lastFinishedPulling="2026-01-21 10:59:13.08848567 +0000 UTC m=+1337.549273268" observedRunningTime="2026-01-21 10:59:14.241272031 +0000 UTC m=+1338.702059649" watchObservedRunningTime="2026-01-21 10:59:14.27176433 +0000 UTC m=+1338.732551928" Jan 21 10:59:14 crc kubenswrapper[4745]: I0121 10:59:14.274515 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-857f5f7474-w59t2" podStartSLOduration=3.077840987 podStartE2EDuration="8.274507974s" podCreationTimestamp="2026-01-21 10:59:06 +0000 UTC" firstStartedPulling="2026-01-21 10:59:07.889043068 +0000 UTC m=+1332.349830666" lastFinishedPulling="2026-01-21 10:59:13.085710055 +0000 UTC m=+1337.546497653" observedRunningTime="2026-01-21 10:59:14.268640356 +0000 UTC m=+1338.729427954" watchObservedRunningTime="2026-01-21 10:59:14.274507974 +0000 UTC m=+1338.735295572" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.231512 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"a6aca1df-e09e-42d8-8046-be985160f75a","Type":"ContainerStarted","Data":"5e05a93f8ba75c8d1a4544d2fc9e0ca1b53affb6e138af14dc6c67070180d29c"} Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.249125 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.15145169 podStartE2EDuration="30.249108583s" podCreationTimestamp="2026-01-21 10:58:45 +0000 UTC" firstStartedPulling="2026-01-21 10:58:46.738675738 +0000 
UTC m=+1311.199463336" lastFinishedPulling="2026-01-21 10:59:13.836332631 +0000 UTC m=+1338.297120229" observedRunningTime="2026-01-21 10:59:15.247690385 +0000 UTC m=+1339.708477983" watchObservedRunningTime="2026-01-21 10:59:15.249108583 +0000 UTC m=+1339.709896181" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.441468 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-8467cf6f5c-snz6t"] Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.442884 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.490139 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-67ddbd4cb4-nt52k"] Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.491281 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.507959 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8467cf6f5c-snz6t"] Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.523017 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-8664d6b777-4qd85"] Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.526767 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.585901 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-config-data-custom\") pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.585949 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-config-data\") pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.585971 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2ql8\" (UniqueName: \"kubernetes.io/projected/0e55f349-abda-42f1-aa42-80e2169fbd6d-kube-api-access-r2ql8\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.586009 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-combined-ca-bundle\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.586035 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-combined-ca-bundle\") 
pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.586058 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzjfn\" (UniqueName: \"kubernetes.io/projected/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-kube-api-access-pzjfn\") pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.586075 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-combined-ca-bundle\") pod \"heat-api-8467cf6f5c-snz6t\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.586111 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data\") pod \"heat-api-8467cf6f5c-snz6t\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.586133 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data-custom\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.586162 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szmxw\" (UniqueName: 
\"kubernetes.io/projected/82af10a4-9afe-4316-a938-53633e6e0889-kube-api-access-szmxw\") pod \"heat-api-8467cf6f5c-snz6t\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.586184 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data-custom\") pod \"heat-api-8467cf6f5c-snz6t\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.586203 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.603896 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-67ddbd4cb4-nt52k"] Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.614610 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8664d6b777-4qd85"] Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.690998 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data-custom\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.691063 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szmxw\" (UniqueName: 
\"kubernetes.io/projected/82af10a4-9afe-4316-a938-53633e6e0889-kube-api-access-szmxw\") pod \"heat-api-8467cf6f5c-snz6t\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.691088 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data-custom\") pod \"heat-api-8467cf6f5c-snz6t\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.691114 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.691153 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-config-data-custom\") pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.691182 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-config-data\") pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.691206 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2ql8\" (UniqueName: 
\"kubernetes.io/projected/0e55f349-abda-42f1-aa42-80e2169fbd6d-kube-api-access-r2ql8\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.691252 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-combined-ca-bundle\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.691284 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-combined-ca-bundle\") pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.691316 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzjfn\" (UniqueName: \"kubernetes.io/projected/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-kube-api-access-pzjfn\") pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.691334 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-combined-ca-bundle\") pod \"heat-api-8467cf6f5c-snz6t\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.691369 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data\") pod \"heat-api-8467cf6f5c-snz6t\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.709234 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-config-data\") pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.711386 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data-custom\") pod \"heat-api-8467cf6f5c-snz6t\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.711464 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-combined-ca-bundle\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.712146 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-combined-ca-bundle\") pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.712568 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data\") pod \"heat-api-8467cf6f5c-snz6t\" 
(UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.713490 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.715310 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data-custom\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.723227 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2ql8\" (UniqueName: \"kubernetes.io/projected/0e55f349-abda-42f1-aa42-80e2169fbd6d-kube-api-access-r2ql8\") pod \"heat-cfnapi-8664d6b777-4qd85\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.725557 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-combined-ca-bundle\") pod \"heat-api-8467cf6f5c-snz6t\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.733325 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzjfn\" (UniqueName: \"kubernetes.io/projected/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-kube-api-access-pzjfn\") pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " 
pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.733957 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szmxw\" (UniqueName: \"kubernetes.io/projected/82af10a4-9afe-4316-a938-53633e6e0889-kube-api-access-szmxw\") pod \"heat-api-8467cf6f5c-snz6t\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.760381 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e-config-data-custom\") pod \"heat-engine-67ddbd4cb4-nt52k\" (UID: \"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e\") " pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.771671 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.812554 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:15 crc kubenswrapper[4745]: I0121 10:59:15.852650 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:16 crc kubenswrapper[4745]: I0121 10:59:16.245664 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db33933a-aeb2-443f-a4d1-e8b514bf57fb","Type":"ContainerStarted","Data":"7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07"} Jan 21 10:59:16 crc kubenswrapper[4745]: I0121 10:59:16.245948 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 10:59:16 crc kubenswrapper[4745]: I0121 10:59:16.286344 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.077724643 podStartE2EDuration="11.286309396s" podCreationTimestamp="2026-01-21 10:59:05 +0000 UTC" firstStartedPulling="2026-01-21 10:59:06.812484868 +0000 UTC m=+1331.273272466" lastFinishedPulling="2026-01-21 10:59:15.021069621 +0000 UTC m=+1339.481857219" observedRunningTime="2026-01-21 10:59:16.272986557 +0000 UTC m=+1340.733774145" watchObservedRunningTime="2026-01-21 10:59:16.286309396 +0000 UTC m=+1340.747096994" Jan 21 10:59:16 crc kubenswrapper[4745]: I0121 10:59:16.530862 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-8467cf6f5c-snz6t"] Jan 21 10:59:16 crc kubenswrapper[4745]: I0121 10:59:16.543725 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 10:59:16 crc kubenswrapper[4745]: I0121 10:59:16.628153 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-67ddbd4cb4-nt52k"] Jan 21 10:59:16 crc kubenswrapper[4745]: I0121 10:59:16.707974 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4v29"] Jan 21 10:59:16 crc kubenswrapper[4745]: I0121 10:59:16.708282 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" 
podUID="c12c404b-4d65-4d44-a58f-ab20031237eb" containerName="dnsmasq-dns" containerID="cri-o://9351db559210fdfb90a091b3ee9579b56be5e20ec5e6bfcabd921cfd88bd0aac" gracePeriod=10 Jan 21 10:59:16 crc kubenswrapper[4745]: I0121 10:59:16.834872 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8664d6b777-4qd85"] Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.280182 4745 generic.go:334] "Generic (PLEG): container finished" podID="c12c404b-4d65-4d44-a58f-ab20031237eb" containerID="9351db559210fdfb90a091b3ee9579b56be5e20ec5e6bfcabd921cfd88bd0aac" exitCode=0 Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.288631 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" event={"ID":"c12c404b-4d65-4d44-a58f-ab20031237eb","Type":"ContainerDied","Data":"9351db559210fdfb90a091b3ee9579b56be5e20ec5e6bfcabd921cfd88bd0aac"} Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.288692 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67ddbd4cb4-nt52k" event={"ID":"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e","Type":"ContainerStarted","Data":"40dc98d94c70b90c486b2ef133ec750779e9db28da1e5e78a5612db2b177cdb4"} Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.293436 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8664d6b777-4qd85" event={"ID":"0e55f349-abda-42f1-aa42-80e2169fbd6d","Type":"ContainerStarted","Data":"673784124bd1e0ec0d20f2c0fff928686b398011684e064595672ac4f4f48d67"} Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.295312 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8467cf6f5c-snz6t" event={"ID":"82af10a4-9afe-4316-a938-53633e6e0889","Type":"ContainerStarted","Data":"adfde8e8a47443aa808c2f3a8ab69422532a98d3edbc13c3fc3e583eaa45088a"} Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.588948 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.705832 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-svc\") pod \"c12c404b-4d65-4d44-a58f-ab20031237eb\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.705875 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-swift-storage-0\") pod \"c12c404b-4d65-4d44-a58f-ab20031237eb\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.705908 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-nb\") pod \"c12c404b-4d65-4d44-a58f-ab20031237eb\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.705935 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9vxz\" (UniqueName: \"kubernetes.io/projected/c12c404b-4d65-4d44-a58f-ab20031237eb-kube-api-access-l9vxz\") pod \"c12c404b-4d65-4d44-a58f-ab20031237eb\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.705954 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-config\") pod \"c12c404b-4d65-4d44-a58f-ab20031237eb\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.706008 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-sb\") pod \"c12c404b-4d65-4d44-a58f-ab20031237eb\" (UID: \"c12c404b-4d65-4d44-a58f-ab20031237eb\") " Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.742789 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c12c404b-4d65-4d44-a58f-ab20031237eb-kube-api-access-l9vxz" (OuterVolumeSpecName: "kube-api-access-l9vxz") pod "c12c404b-4d65-4d44-a58f-ab20031237eb" (UID: "c12c404b-4d65-4d44-a58f-ab20031237eb"). InnerVolumeSpecName "kube-api-access-l9vxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.808517 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9vxz\" (UniqueName: \"kubernetes.io/projected/c12c404b-4d65-4d44-a58f-ab20031237eb-kube-api-access-l9vxz\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.828166 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c12c404b-4d65-4d44-a58f-ab20031237eb" (UID: "c12c404b-4d65-4d44-a58f-ab20031237eb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.831366 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c12c404b-4d65-4d44-a58f-ab20031237eb" (UID: "c12c404b-4d65-4d44-a58f-ab20031237eb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.852612 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c12c404b-4d65-4d44-a58f-ab20031237eb" (UID: "c12c404b-4d65-4d44-a58f-ab20031237eb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.859031 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-config" (OuterVolumeSpecName: "config") pod "c12c404b-4d65-4d44-a58f-ab20031237eb" (UID: "c12c404b-4d65-4d44-a58f-ab20031237eb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.863242 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c12c404b-4d65-4d44-a58f-ab20031237eb" (UID: "c12c404b-4d65-4d44-a58f-ab20031237eb"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.910684 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-config\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.910719 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.910731 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.910739 4745 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:17 crc kubenswrapper[4745]: I0121 10:59:17.910751 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c12c404b-4d65-4d44-a58f-ab20031237eb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.304800 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" event={"ID":"c12c404b-4d65-4d44-a58f-ab20031237eb","Type":"ContainerDied","Data":"89f3bf582d99284d950e086a24cf1df5b857fdc43c88223a706cb29fad1836b2"} Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.304857 4745 scope.go:117] "RemoveContainer" containerID="9351db559210fdfb90a091b3ee9579b56be5e20ec5e6bfcabd921cfd88bd0aac" Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.304821 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4v29" Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.306429 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-67ddbd4cb4-nt52k" event={"ID":"d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e","Type":"ContainerStarted","Data":"ef0cec86841895deaf6f6617c8cc168aa8ff44c46c971ffe77f3c085a5260d7c"} Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.306569 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.309924 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8664d6b777-4qd85" event={"ID":"0e55f349-abda-42f1-aa42-80e2169fbd6d","Type":"ContainerStarted","Data":"f09166449549acf25e3f8d9645ecbc9653e69dc554d67f522cd2192c4c2fb2fc"} Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.310725 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.314641 4745 generic.go:334] "Generic (PLEG): container finished" podID="82af10a4-9afe-4316-a938-53633e6e0889" containerID="985ebc79172b9d1d7d84df1c7d94646278cb7e581faefc497749504953786087" exitCode=1 Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.314698 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8467cf6f5c-snz6t" event={"ID":"82af10a4-9afe-4316-a938-53633e6e0889","Type":"ContainerDied","Data":"985ebc79172b9d1d7d84df1c7d94646278cb7e581faefc497749504953786087"} Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.315246 4745 scope.go:117] "RemoveContainer" containerID="985ebc79172b9d1d7d84df1c7d94646278cb7e581faefc497749504953786087" Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.342811 4745 scope.go:117] "RemoveContainer" containerID="0e2a22281c8f2d3f772fbd38e963863662224c3644969d00c9f9a26f9be4b75e" Jan 21 10:59:18 crc 
kubenswrapper[4745]: I0121 10:59:18.358044 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-67ddbd4cb4-nt52k" podStartSLOduration=3.358023867 podStartE2EDuration="3.358023867s" podCreationTimestamp="2026-01-21 10:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:18.333122247 +0000 UTC m=+1342.793909845" watchObservedRunningTime="2026-01-21 10:59:18.358023867 +0000 UTC m=+1342.818811465" Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.380724 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4v29"] Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.392523 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4v29"] Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.409305 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-8664d6b777-4qd85" podStartSLOduration=3.409285905 podStartE2EDuration="3.409285905s" podCreationTimestamp="2026-01-21 10:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:18.398201097 +0000 UTC m=+1342.858988695" watchObservedRunningTime="2026-01-21 10:59:18.409285905 +0000 UTC m=+1342.870073503" Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.981058 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.981323 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="ceilometer-central-agent" containerID="cri-o://c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58" gracePeriod=30 Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 
10:59:18.981411 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="ceilometer-notification-agent" containerID="cri-o://c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51" gracePeriod=30 Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.981413 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="sg-core" containerID="cri-o://11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64" gracePeriod=30 Jan 21 10:59:18 crc kubenswrapper[4745]: I0121 10:59:18.981438 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="proxy-httpd" containerID="cri-o://7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07" gracePeriod=30 Jan 21 10:59:19 crc kubenswrapper[4745]: I0121 10:59:19.355035 4745 generic.go:334] "Generic (PLEG): container finished" podID="0e55f349-abda-42f1-aa42-80e2169fbd6d" containerID="f09166449549acf25e3f8d9645ecbc9653e69dc554d67f522cd2192c4c2fb2fc" exitCode=1 Jan 21 10:59:19 crc kubenswrapper[4745]: I0121 10:59:19.355970 4745 scope.go:117] "RemoveContainer" containerID="f09166449549acf25e3f8d9645ecbc9653e69dc554d67f522cd2192c4c2fb2fc" Jan 21 10:59:19 crc kubenswrapper[4745]: I0121 10:59:19.356495 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8664d6b777-4qd85" event={"ID":"0e55f349-abda-42f1-aa42-80e2169fbd6d","Type":"ContainerDied","Data":"f09166449549acf25e3f8d9645ecbc9653e69dc554d67f522cd2192c4c2fb2fc"} Jan 21 10:59:19 crc kubenswrapper[4745]: I0121 10:59:19.375033 4745 generic.go:334] "Generic (PLEG): container finished" podID="82af10a4-9afe-4316-a938-53633e6e0889" containerID="655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6" exitCode=1 
Jan 21 10:59:19 crc kubenswrapper[4745]: I0121 10:59:19.375724 4745 scope.go:117] "RemoveContainer" containerID="655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6" Jan 21 10:59:19 crc kubenswrapper[4745]: E0121 10:59:19.375931 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8467cf6f5c-snz6t_openstack(82af10a4-9afe-4316-a938-53633e6e0889)\"" pod="openstack/heat-api-8467cf6f5c-snz6t" podUID="82af10a4-9afe-4316-a938-53633e6e0889" Jan 21 10:59:19 crc kubenswrapper[4745]: I0121 10:59:19.376131 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8467cf6f5c-snz6t" event={"ID":"82af10a4-9afe-4316-a938-53633e6e0889","Type":"ContainerDied","Data":"655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6"} Jan 21 10:59:19 crc kubenswrapper[4745]: I0121 10:59:19.376158 4745 scope.go:117] "RemoveContainer" containerID="985ebc79172b9d1d7d84df1c7d94646278cb7e581faefc497749504953786087" Jan 21 10:59:19 crc kubenswrapper[4745]: I0121 10:59:19.402852 4745 generic.go:334] "Generic (PLEG): container finished" podID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerID="11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64" exitCode=2 Jan 21 10:59:19 crc kubenswrapper[4745]: I0121 10:59:19.402949 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db33933a-aeb2-443f-a4d1-e8b514bf57fb","Type":"ContainerDied","Data":"11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64"} Jan 21 10:59:19 crc kubenswrapper[4745]: I0121 10:59:19.713272 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: 
connect: connection refused" Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.015302 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c12c404b-4d65-4d44-a58f-ab20031237eb" path="/var/lib/kubelet/pods/c12c404b-4d65-4d44-a58f-ab20031237eb/volumes" Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.029797 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.029864 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.030553 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"379551ea665f8240a2a6912e8cabdcc3ee0f825c366fa7f7368ad2258467570f"} pod="openstack/horizon-5cdbfc4d4d-pm6ln" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.030598 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" containerID="cri-o://379551ea665f8240a2a6912e8cabdcc3ee0f825c366fa7f7368ad2258467570f" gracePeriod=30 Jan 21 10:59:20 crc kubenswrapper[4745]: E0121 10:59:20.161012 4745 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e55f349_abda_42f1_aa42_80e2169fbd6d.slice/crio-ee13d6e7da84e9693b4718d40e9e4c8766ec4d7843dcb42c97790835d621dedb.scope\": RecentStats: unable to find data in memory cache]" Jan 21 
10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.415147 4745 generic.go:334] "Generic (PLEG): container finished" podID="0e55f349-abda-42f1-aa42-80e2169fbd6d" containerID="ee13d6e7da84e9693b4718d40e9e4c8766ec4d7843dcb42c97790835d621dedb" exitCode=1 Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.415212 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8664d6b777-4qd85" event={"ID":"0e55f349-abda-42f1-aa42-80e2169fbd6d","Type":"ContainerDied","Data":"ee13d6e7da84e9693b4718d40e9e4c8766ec4d7843dcb42c97790835d621dedb"} Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.415586 4745 scope.go:117] "RemoveContainer" containerID="f09166449549acf25e3f8d9645ecbc9653e69dc554d67f522cd2192c4c2fb2fc" Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.415929 4745 scope.go:117] "RemoveContainer" containerID="ee13d6e7da84e9693b4718d40e9e4c8766ec4d7843dcb42c97790835d621dedb" Jan 21 10:59:20 crc kubenswrapper[4745]: E0121 10:59:20.416202 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8664d6b777-4qd85_openstack(0e55f349-abda-42f1-aa42-80e2169fbd6d)\"" pod="openstack/heat-cfnapi-8664d6b777-4qd85" podUID="0e55f349-abda-42f1-aa42-80e2169fbd6d" Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.417887 4745 scope.go:117] "RemoveContainer" containerID="655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6" Jan 21 10:59:20 crc kubenswrapper[4745]: E0121 10:59:20.418108 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8467cf6f5c-snz6t_openstack(82af10a4-9afe-4316-a938-53633e6e0889)\"" pod="openstack/heat-api-8467cf6f5c-snz6t" podUID="82af10a4-9afe-4316-a938-53633e6e0889" Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 
10:59:20.432367 4745 generic.go:334] "Generic (PLEG): container finished" podID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerID="7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07" exitCode=0 Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.432409 4745 generic.go:334] "Generic (PLEG): container finished" podID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerID="c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51" exitCode=0 Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.432438 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db33933a-aeb2-443f-a4d1-e8b514bf57fb","Type":"ContainerDied","Data":"7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07"} Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.432470 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db33933a-aeb2-443f-a4d1-e8b514bf57fb","Type":"ContainerDied","Data":"c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51"} Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.772743 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.772808 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.853981 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:20 crc kubenswrapper[4745]: I0121 10:59:20.854257 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.116958 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-857f5f7474-w59t2"] Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 
10:59:21.117273 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-857f5f7474-w59t2" podUID="5db7838a-7e98-4345-b958-56cfae3c59e7" containerName="heat-cfnapi" containerID="cri-o://e9dbd15b7055d340939a07062c7769bc063e9533659fc5f255ec4576988e4839" gracePeriod=60 Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.142091 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-586848db89-qxdqf"] Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.142342 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-586848db89-qxdqf" podUID="b4549eb9-0d8a-4d5a-9375-519f740f36ed" containerName="heat-api" containerID="cri-o://fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2" gracePeriod=60 Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.155309 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-586848db89-qxdqf" podUID="b4549eb9-0d8a-4d5a-9375-519f740f36ed" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.178:8004/healthcheck\": EOF" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.194003 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-d56bdb47c-z8b9m"] Jan 21 10:59:21 crc kubenswrapper[4745]: E0121 10:59:21.194362 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12c404b-4d65-4d44-a58f-ab20031237eb" containerName="dnsmasq-dns" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.194379 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c12c404b-4d65-4d44-a58f-ab20031237eb" containerName="dnsmasq-dns" Jan 21 10:59:21 crc kubenswrapper[4745]: E0121 10:59:21.194408 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c12c404b-4d65-4d44-a58f-ab20031237eb" containerName="init" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.194415 4745 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c12c404b-4d65-4d44-a58f-ab20031237eb" containerName="init" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.194640 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c12c404b-4d65-4d44-a58f-ab20031237eb" containerName="dnsmasq-dns" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.195192 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.197173 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.203305 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.216266 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-combined-ca-bundle\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.216334 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-internal-tls-certs\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.216378 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-config-data-custom\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " 
pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.216393 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-public-tls-certs\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.216412 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv2zv\" (UniqueName: \"kubernetes.io/projected/ad490541-95a2-46cd-97ef-7afa19e9e5f9-kube-api-access-sv2zv\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.216467 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-config-data\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.241420 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d56bdb47c-z8b9m"] Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.268607 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6d44b77d95-2fvz9"] Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.270405 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.280495 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.280786 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.317966 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-internal-tls-certs\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.318047 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-config-data-custom\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.318069 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-public-tls-certs\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.318099 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv2zv\" (UniqueName: \"kubernetes.io/projected/ad490541-95a2-46cd-97ef-7afa19e9e5f9-kube-api-access-sv2zv\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc 
kubenswrapper[4745]: I0121 10:59:21.318180 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-config-data\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.318264 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-combined-ca-bundle\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.325375 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d44b77d95-2fvz9"] Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.331655 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-combined-ca-bundle\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.332857 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-internal-tls-certs\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.336329 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-config-data\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: 
\"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.342431 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-config-data-custom\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.343687 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ad490541-95a2-46cd-97ef-7afa19e9e5f9-public-tls-certs\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.348365 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv2zv\" (UniqueName: \"kubernetes.io/projected/ad490541-95a2-46cd-97ef-7afa19e9e5f9-kube-api-access-sv2zv\") pod \"heat-cfnapi-d56bdb47c-z8b9m\" (UID: \"ad490541-95a2-46cd-97ef-7afa19e9e5f9\") " pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.420051 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-combined-ca-bundle\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.421262 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-public-tls-certs\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: 
\"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.421501 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-config-data\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.421688 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-internal-tls-certs\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.421789 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-config-data-custom\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.421909 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np7h8\" (UniqueName: \"kubernetes.io/projected/5a433daf-1db3-4263-9a10-28d03dc300b7-kube-api-access-np7h8\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.467755 4745 scope.go:117] "RemoveContainer" containerID="655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6" Jan 21 10:59:21 crc kubenswrapper[4745]: E0121 10:59:21.468122 4745 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8467cf6f5c-snz6t_openstack(82af10a4-9afe-4316-a938-53633e6e0889)\"" pod="openstack/heat-api-8467cf6f5c-snz6t" podUID="82af10a4-9afe-4316-a938-53633e6e0889" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.470060 4745 scope.go:117] "RemoveContainer" containerID="ee13d6e7da84e9693b4718d40e9e4c8766ec4d7843dcb42c97790835d621dedb" Jan 21 10:59:21 crc kubenswrapper[4745]: E0121 10:59:21.470329 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-8664d6b777-4qd85_openstack(0e55f349-abda-42f1-aa42-80e2169fbd6d)\"" pod="openstack/heat-cfnapi-8664d6b777-4qd85" podUID="0e55f349-abda-42f1-aa42-80e2169fbd6d" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.524383 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-combined-ca-bundle\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.524472 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-public-tls-certs\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.524562 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-config-data\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: 
\"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.524610 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-internal-tls-certs\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.524629 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-config-data-custom\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.524649 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np7h8\" (UniqueName: \"kubernetes.io/projected/5a433daf-1db3-4263-9a10-28d03dc300b7-kube-api-access-np7h8\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.530672 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.532797 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-public-tls-certs\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.533371 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-combined-ca-bundle\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.537912 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-config-data\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.543998 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-config-data-custom\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.547189 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a433daf-1db3-4263-9a10-28d03dc300b7-internal-tls-certs\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 
10:59:21.576621 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np7h8\" (UniqueName: \"kubernetes.io/projected/5a433daf-1db3-4263-9a10-28d03dc300b7-kube-api-access-np7h8\") pod \"heat-api-6d44b77d95-2fvz9\" (UID: \"5a433daf-1db3-4263-9a10-28d03dc300b7\") " pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.618172 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:21 crc kubenswrapper[4745]: I0121 10:59:21.733701 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-857f5f7474-w59t2" podUID="5db7838a-7e98-4345-b958-56cfae3c59e7" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.179:8000/healthcheck\": read tcp 10.217.0.2:51790->10.217.0.179:8000: read: connection reset by peer" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.142881 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-857f5f7474-w59t2" podUID="5db7838a-7e98-4345-b958-56cfae3c59e7" containerName="heat-cfnapi" probeResult="failure" output="Get \"http://10.217.0.179:8000/healthcheck\": dial tcp 10.217.0.179:8000: connect: connection refused" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.289132 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-d56bdb47c-z8b9m"] Jan 21 10:59:22 crc kubenswrapper[4745]: W0121 10:59:22.305869 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad490541_95a2_46cd_97ef_7afa19e9e5f9.slice/crio-974fca25c90b0360275cbcd8155ef2c036902a685f954b79df1975f52d182de2 WatchSource:0}: Error finding container 974fca25c90b0360275cbcd8155ef2c036902a685f954b79df1975f52d182de2: Status 404 returned error can't find the container with id 974fca25c90b0360275cbcd8155ef2c036902a685f954b79df1975f52d182de2 Jan 21 
10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.490733 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" event={"ID":"ad490541-95a2-46cd-97ef-7afa19e9e5f9","Type":"ContainerStarted","Data":"974fca25c90b0360275cbcd8155ef2c036902a685f954b79df1975f52d182de2"} Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.492575 4745 generic.go:334] "Generic (PLEG): container finished" podID="5db7838a-7e98-4345-b958-56cfae3c59e7" containerID="e9dbd15b7055d340939a07062c7769bc063e9533659fc5f255ec4576988e4839" exitCode=0 Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.492654 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-857f5f7474-w59t2" event={"ID":"5db7838a-7e98-4345-b958-56cfae3c59e7","Type":"ContainerDied","Data":"e9dbd15b7055d340939a07062c7769bc063e9533659fc5f255ec4576988e4839"} Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.492704 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-857f5f7474-w59t2" event={"ID":"5db7838a-7e98-4345-b958-56cfae3c59e7","Type":"ContainerDied","Data":"b7886ce9f5bad90d7390cf796e9dc8859573fce96582635a6e49f3abba9223d2"} Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.492719 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7886ce9f5bad90d7390cf796e9dc8859573fce96582635a6e49f3abba9223d2" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.494069 4745 scope.go:117] "RemoveContainer" containerID="655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.494146 4745 scope.go:117] "RemoveContainer" containerID="ee13d6e7da84e9693b4718d40e9e4c8766ec4d7843dcb42c97790835d621dedb" Jan 21 10:59:22 crc kubenswrapper[4745]: E0121 10:59:22.494459 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=heat-cfnapi pod=heat-cfnapi-8664d6b777-4qd85_openstack(0e55f349-abda-42f1-aa42-80e2169fbd6d)\"" pod="openstack/heat-cfnapi-8664d6b777-4qd85" podUID="0e55f349-abda-42f1-aa42-80e2169fbd6d" Jan 21 10:59:22 crc kubenswrapper[4745]: E0121 10:59:22.494503 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-8467cf6f5c-snz6t_openstack(82af10a4-9afe-4316-a938-53633e6e0889)\"" pod="openstack/heat-api-8467cf6f5c-snz6t" podUID="82af10a4-9afe-4316-a938-53633e6e0889" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.508069 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.557584 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data-custom\") pod \"5db7838a-7e98-4345-b958-56cfae3c59e7\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.557640 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-combined-ca-bundle\") pod \"5db7838a-7e98-4345-b958-56cfae3c59e7\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.557691 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data\") pod \"5db7838a-7e98-4345-b958-56cfae3c59e7\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.557760 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-5dl2z\" (UniqueName: \"kubernetes.io/projected/5db7838a-7e98-4345-b958-56cfae3c59e7-kube-api-access-5dl2z\") pod \"5db7838a-7e98-4345-b958-56cfae3c59e7\" (UID: \"5db7838a-7e98-4345-b958-56cfae3c59e7\") " Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.559604 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6d44b77d95-2fvz9"] Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.568279 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5db7838a-7e98-4345-b958-56cfae3c59e7-kube-api-access-5dl2z" (OuterVolumeSpecName: "kube-api-access-5dl2z") pod "5db7838a-7e98-4345-b958-56cfae3c59e7" (UID: "5db7838a-7e98-4345-b958-56cfae3c59e7"). InnerVolumeSpecName "kube-api-access-5dl2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.578504 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5db7838a-7e98-4345-b958-56cfae3c59e7" (UID: "5db7838a-7e98-4345-b958-56cfae3c59e7"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.659919 4745 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.659972 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dl2z\" (UniqueName: \"kubernetes.io/projected/5db7838a-7e98-4345-b958-56cfae3c59e7-kube-api-access-5dl2z\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.689054 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5db7838a-7e98-4345-b958-56cfae3c59e7" (UID: "5db7838a-7e98-4345-b958-56cfae3c59e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.759754 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data" (OuterVolumeSpecName: "config-data") pod "5db7838a-7e98-4345-b958-56cfae3c59e7" (UID: "5db7838a-7e98-4345-b958-56cfae3c59e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.761289 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:22 crc kubenswrapper[4745]: I0121 10:59:22.761313 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5db7838a-7e98-4345-b958-56cfae3c59e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:23 crc kubenswrapper[4745]: I0121 10:59:23.504584 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" event={"ID":"ad490541-95a2-46cd-97ef-7afa19e9e5f9","Type":"ContainerStarted","Data":"892650a7c09c86e50b51dde8ae05a322447af33b3f18002c1966902dce4e2b44"} Jan 21 10:59:23 crc kubenswrapper[4745]: I0121 10:59:23.504913 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:23 crc kubenswrapper[4745]: I0121 10:59:23.506486 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-857f5f7474-w59t2" Jan 21 10:59:23 crc kubenswrapper[4745]: I0121 10:59:23.506506 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d44b77d95-2fvz9" event={"ID":"5a433daf-1db3-4263-9a10-28d03dc300b7","Type":"ContainerStarted","Data":"214e0b5fedb1a92ed3d75077e7aa8835ae775ddb767d69cba2b307a5f34f1a6a"} Jan 21 10:59:23 crc kubenswrapper[4745]: I0121 10:59:23.506565 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6d44b77d95-2fvz9" event={"ID":"5a433daf-1db3-4263-9a10-28d03dc300b7","Type":"ContainerStarted","Data":"c3397c50ac3c6998ccf7e5143e23193be56243fdb6235e76aec74846ed0c3b1e"} Jan 21 10:59:23 crc kubenswrapper[4745]: I0121 10:59:23.506732 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:23 crc kubenswrapper[4745]: I0121 10:59:23.534755 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" podStartSLOduration=2.534736727 podStartE2EDuration="2.534736727s" podCreationTimestamp="2026-01-21 10:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:23.528559682 +0000 UTC m=+1347.989347280" watchObservedRunningTime="2026-01-21 10:59:23.534736727 +0000 UTC m=+1347.995524315" Jan 21 10:59:23 crc kubenswrapper[4745]: I0121 10:59:23.568594 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-857f5f7474-w59t2"] Jan 21 10:59:23 crc kubenswrapper[4745]: I0121 10:59:23.573878 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-857f5f7474-w59t2"] Jan 21 10:59:23 crc kubenswrapper[4745]: I0121 10:59:23.593467 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6d44b77d95-2fvz9" podStartSLOduration=2.593437096 
podStartE2EDuration="2.593437096s" podCreationTimestamp="2026-01-21 10:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:23.592196462 +0000 UTC m=+1348.052984060" watchObservedRunningTime="2026-01-21 10:59:23.593437096 +0000 UTC m=+1348.054224694" Jan 21 10:59:24 crc kubenswrapper[4745]: I0121 10:59:24.009916 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5db7838a-7e98-4345-b958-56cfae3c59e7" path="/var/lib/kubelet/pods/5db7838a-7e98-4345-b958-56cfae3c59e7/volumes" Jan 21 10:59:26 crc kubenswrapper[4745]: I0121 10:59:26.296978 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:27 crc kubenswrapper[4745]: I0121 10:59:27.717597 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-586848db89-qxdqf" podUID="b4549eb9-0d8a-4d5a-9375-519f740f36ed" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.178:8004/healthcheck\": read tcp 10.217.0.2:46852->10.217.0.178:8004: read: connection reset by peer" Jan 21 10:59:27 crc kubenswrapper[4745]: I0121 10:59:27.718015 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-586848db89-qxdqf" podUID="b4549eb9-0d8a-4d5a-9375-519f740f36ed" containerName="heat-api" probeResult="failure" output="Get \"http://10.217.0.178:8004/healthcheck\": dial tcp 10.217.0.178:8004: connect: connection refused" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.413401 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.494992 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdgt2\" (UniqueName: \"kubernetes.io/projected/b4549eb9-0d8a-4d5a-9375-519f740f36ed-kube-api-access-pdgt2\") pod \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.495065 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data\") pod \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.495172 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data-custom\") pod \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.495273 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-combined-ca-bundle\") pod \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\" (UID: \"b4549eb9-0d8a-4d5a-9375-519f740f36ed\") " Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.504935 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4549eb9-0d8a-4d5a-9375-519f740f36ed-kube-api-access-pdgt2" (OuterVolumeSpecName: "kube-api-access-pdgt2") pod "b4549eb9-0d8a-4d5a-9375-519f740f36ed" (UID: "b4549eb9-0d8a-4d5a-9375-519f740f36ed"). InnerVolumeSpecName "kube-api-access-pdgt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.517759 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b4549eb9-0d8a-4d5a-9375-519f740f36ed" (UID: "b4549eb9-0d8a-4d5a-9375-519f740f36ed"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.534890 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.556716 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b4549eb9-0d8a-4d5a-9375-519f740f36ed" (UID: "b4549eb9-0d8a-4d5a-9375-519f740f36ed"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.565788 4745 generic.go:334] "Generic (PLEG): container finished" podID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerID="c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58" exitCode=0 Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.565858 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db33933a-aeb2-443f-a4d1-e8b514bf57fb","Type":"ContainerDied","Data":"c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58"} Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.565887 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"db33933a-aeb2-443f-a4d1-e8b514bf57fb","Type":"ContainerDied","Data":"3ef6968f9a59306c5b27f87fdbbb66908e688b8b486f9fba8dd807188abe0b06"} Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.565905 4745 scope.go:117] "RemoveContainer" containerID="7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.566076 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.570316 4745 generic.go:334] "Generic (PLEG): container finished" podID="b4549eb9-0d8a-4d5a-9375-519f740f36ed" containerID="fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2" exitCode=0 Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.570353 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-586848db89-qxdqf" event={"ID":"b4549eb9-0d8a-4d5a-9375-519f740f36ed","Type":"ContainerDied","Data":"fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2"} Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.570375 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-586848db89-qxdqf" event={"ID":"b4549eb9-0d8a-4d5a-9375-519f740f36ed","Type":"ContainerDied","Data":"773cf1984bdc3228387794739863e8eefe1b8c419d0f2418207fca6447547496"} Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.570421 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-586848db89-qxdqf" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.598095 4745 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.598167 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.598208 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdgt2\" (UniqueName: \"kubernetes.io/projected/b4549eb9-0d8a-4d5a-9375-519f740f36ed-kube-api-access-pdgt2\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.603140 4745 scope.go:117] "RemoveContainer" containerID="11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.616396 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data" (OuterVolumeSpecName: "config-data") pod "b4549eb9-0d8a-4d5a-9375-519f740f36ed" (UID: "b4549eb9-0d8a-4d5a-9375-519f740f36ed"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.648647 4745 scope.go:117] "RemoveContainer" containerID="c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.682205 4745 scope.go:117] "RemoveContainer" containerID="c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.699635 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-config-data\") pod \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.699736 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-log-httpd\") pod \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.699764 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-scripts\") pod \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.699903 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-run-httpd\") pod \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.699948 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-sg-core-conf-yaml\") pod \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.700038 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-combined-ca-bundle\") pod \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.700073 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzgl2\" (UniqueName: \"kubernetes.io/projected/db33933a-aeb2-443f-a4d1-e8b514bf57fb-kube-api-access-dzgl2\") pod \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\" (UID: \"db33933a-aeb2-443f-a4d1-e8b514bf57fb\") " Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.700564 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4549eb9-0d8a-4d5a-9375-519f740f36ed-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.703364 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "db33933a-aeb2-443f-a4d1-e8b514bf57fb" (UID: "db33933a-aeb2-443f-a4d1-e8b514bf57fb"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.703472 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "db33933a-aeb2-443f-a4d1-e8b514bf57fb" (UID: "db33933a-aeb2-443f-a4d1-e8b514bf57fb"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.704039 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db33933a-aeb2-443f-a4d1-e8b514bf57fb-kube-api-access-dzgl2" (OuterVolumeSpecName: "kube-api-access-dzgl2") pod "db33933a-aeb2-443f-a4d1-e8b514bf57fb" (UID: "db33933a-aeb2-443f-a4d1-e8b514bf57fb"). InnerVolumeSpecName "kube-api-access-dzgl2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.705372 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-scripts" (OuterVolumeSpecName: "scripts") pod "db33933a-aeb2-443f-a4d1-e8b514bf57fb" (UID: "db33933a-aeb2-443f-a4d1-e8b514bf57fb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.708299 4745 scope.go:117] "RemoveContainer" containerID="7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07" Jan 21 10:59:28 crc kubenswrapper[4745]: E0121 10:59:28.712281 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07\": container with ID starting with 7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07 not found: ID does not exist" containerID="7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.712321 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07"} err="failed to get container status \"7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07\": rpc error: code = NotFound desc = could not find container 
\"7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07\": container with ID starting with 7949f0f8d6d4c9030ca5d3030dec0873cb3fd834a0e35443a9011556867c4a07 not found: ID does not exist" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.712345 4745 scope.go:117] "RemoveContainer" containerID="11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64" Jan 21 10:59:28 crc kubenswrapper[4745]: E0121 10:59:28.714269 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64\": container with ID starting with 11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64 not found: ID does not exist" containerID="11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.714295 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64"} err="failed to get container status \"11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64\": rpc error: code = NotFound desc = could not find container \"11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64\": container with ID starting with 11cde11f1831d2b0a0dec9b64e7408e9f504936f3e6aff145d7cd73f02592b64 not found: ID does not exist" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.714311 4745 scope.go:117] "RemoveContainer" containerID="c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51" Jan 21 10:59:28 crc kubenswrapper[4745]: E0121 10:59:28.718614 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51\": container with ID starting with c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51 not found: ID does not exist" 
containerID="c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.718644 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51"} err="failed to get container status \"c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51\": rpc error: code = NotFound desc = could not find container \"c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51\": container with ID starting with c252959d5324fca17605530635b49b1ff12773eac482c906514743c61c067e51 not found: ID does not exist" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.718660 4745 scope.go:117] "RemoveContainer" containerID="c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58" Jan 21 10:59:28 crc kubenswrapper[4745]: E0121 10:59:28.722640 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58\": container with ID starting with c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58 not found: ID does not exist" containerID="c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.722676 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58"} err="failed to get container status \"c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58\": rpc error: code = NotFound desc = could not find container \"c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58\": container with ID starting with c4c2d8101288b935252e62b81f06e163530da8a58c7f6aed9b6d473c0d047d58 not found: ID does not exist" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.722695 4745 scope.go:117] 
"RemoveContainer" containerID="fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.784775 4745 scope.go:117] "RemoveContainer" containerID="fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2" Jan 21 10:59:28 crc kubenswrapper[4745]: E0121 10:59:28.785217 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2\": container with ID starting with fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2 not found: ID does not exist" containerID="fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.785257 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2"} err="failed to get container status \"fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2\": rpc error: code = NotFound desc = could not find container \"fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2\": container with ID starting with fd619a0824eeb0853b1d367ff508674ed43fd7ac9be19401c9ab4f2cd4d273b2 not found: ID does not exist" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.807508 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzgl2\" (UniqueName: \"kubernetes.io/projected/db33933a-aeb2-443f-a4d1-e8b514bf57fb-kube-api-access-dzgl2\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.807561 4745 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.807570 4745 reconciler_common.go:293] "Volume detached for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.807578 4745 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/db33933a-aeb2-443f-a4d1-e8b514bf57fb-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.831690 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "db33933a-aeb2-443f-a4d1-e8b514bf57fb" (UID: "db33933a-aeb2-443f-a4d1-e8b514bf57fb"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.906795 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db33933a-aeb2-443f-a4d1-e8b514bf57fb" (UID: "db33933a-aeb2-443f-a4d1-e8b514bf57fb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.909009 4745 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.909045 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:28 crc kubenswrapper[4745]: I0121 10:59:28.931587 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-config-data" (OuterVolumeSpecName: "config-data") pod "db33933a-aeb2-443f-a4d1-e8b514bf57fb" (UID: "db33933a-aeb2-443f-a4d1-e8b514bf57fb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.011433 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db33933a-aeb2-443f-a4d1-e8b514bf57fb-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.049364 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-586848db89-qxdqf"] Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.073793 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-586848db89-qxdqf"] Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.210150 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.224228 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.245460 4745 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:29 crc kubenswrapper[4745]: E0121 10:59:29.245826 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4549eb9-0d8a-4d5a-9375-519f740f36ed" containerName="heat-api" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.245843 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4549eb9-0d8a-4d5a-9375-519f740f36ed" containerName="heat-api" Jan 21 10:59:29 crc kubenswrapper[4745]: E0121 10:59:29.245872 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="ceilometer-central-agent" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.245879 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="ceilometer-central-agent" Jan 21 10:59:29 crc kubenswrapper[4745]: E0121 10:59:29.245889 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5db7838a-7e98-4345-b958-56cfae3c59e7" containerName="heat-cfnapi" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.245895 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="5db7838a-7e98-4345-b958-56cfae3c59e7" containerName="heat-cfnapi" Jan 21 10:59:29 crc kubenswrapper[4745]: E0121 10:59:29.245908 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="sg-core" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.245914 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="sg-core" Jan 21 10:59:29 crc kubenswrapper[4745]: E0121 10:59:29.245926 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="proxy-httpd" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.245931 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" 
containerName="proxy-httpd" Jan 21 10:59:29 crc kubenswrapper[4745]: E0121 10:59:29.245945 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="ceilometer-notification-agent" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.245951 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="ceilometer-notification-agent" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.246110 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4549eb9-0d8a-4d5a-9375-519f740f36ed" containerName="heat-api" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.246122 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="ceilometer-central-agent" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.246131 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="sg-core" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.246142 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="proxy-httpd" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.246152 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="5db7838a-7e98-4345-b958-56cfae3c59e7" containerName="heat-cfnapi" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.246167 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" containerName="ceilometer-notification-agent" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.249222 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.253691 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.255670 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.267224 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.320276 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4jbk\" (UniqueName: \"kubernetes.io/projected/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-kube-api-access-m4jbk\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.320333 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.320372 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-config-data\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.320417 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.320565 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-run-httpd\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.320614 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-scripts\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.320652 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-log-httpd\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.421717 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4jbk\" (UniqueName: \"kubernetes.io/projected/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-kube-api-access-m4jbk\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.422052 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.422089 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-config-data\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.422130 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.422176 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-run-httpd\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.422200 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-scripts\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.422222 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-log-httpd\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.422750 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-log-httpd\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 
10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.423607 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-run-httpd\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.434829 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-config-data\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.439021 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-scripts\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.442291 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.444479 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.447520 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4jbk\" (UniqueName: \"kubernetes.io/projected/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-kube-api-access-m4jbk\") pod \"ceilometer-0\" (UID: 
\"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.564380 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.710986 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.711448 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.712668 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"3643118f481e7226b702137d2af839c8cf6efc660091c1400f2eeeabfda81e6f"} pod="openstack/horizon-78cb545d88-xv4bf" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 10:59:29 crc kubenswrapper[4745]: I0121 10:59:29.712721 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" containerID="cri-o://3643118f481e7226b702137d2af839c8cf6efc660091c1400f2eeeabfda81e6f" gracePeriod=30 Jan 21 10:59:30 crc kubenswrapper[4745]: I0121 10:59:30.011403 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4549eb9-0d8a-4d5a-9375-519f740f36ed" path="/var/lib/kubelet/pods/b4549eb9-0d8a-4d5a-9375-519f740f36ed/volumes" Jan 21 10:59:30 crc kubenswrapper[4745]: I0121 10:59:30.011985 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db33933a-aeb2-443f-a4d1-e8b514bf57fb" 
path="/var/lib/kubelet/pods/db33933a-aeb2-443f-a4d1-e8b514bf57fb/volumes" Jan 21 10:59:30 crc kubenswrapper[4745]: I0121 10:59:30.123689 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:30 crc kubenswrapper[4745]: I0121 10:59:30.592607 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4","Type":"ContainerStarted","Data":"6e2c5fb0aad885b89d2a75d179f50fb5f21ccf46ce0963bd2923ae6938d936e0"} Jan 21 10:59:31 crc kubenswrapper[4745]: I0121 10:59:31.602179 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4","Type":"ContainerStarted","Data":"c2e0c913bec92bb83b3bba6c2dc48ebf8e7d0e70a623abe101c849400ca0c572"} Jan 21 10:59:32 crc kubenswrapper[4745]: I0121 10:59:32.317740 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 10:59:32 crc kubenswrapper[4745]: I0121 10:59:32.318387 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e982fb4c-3818-4f04-b7ed-c32666261f07" containerName="glance-httpd" containerID="cri-o://e3a9af177fbd76388849a99670cc03ceb63e58c65a61533b8636f9b26dac0aef" gracePeriod=30 Jan 21 10:59:32 crc kubenswrapper[4745]: I0121 10:59:32.318718 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="e982fb4c-3818-4f04-b7ed-c32666261f07" containerName="glance-log" containerID="cri-o://eba15377726a51f5cbf390f21c83a10b8e4315b98f346b4a1825da1de8255e12" gracePeriod=30 Jan 21 10:59:32 crc kubenswrapper[4745]: I0121 10:59:32.615420 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4","Type":"ContainerStarted","Data":"c135935c2e3790079244fcd9836eefe56b389d93137008398598d212dcd6a92b"} Jan 21 10:59:32 crc kubenswrapper[4745]: I0121 10:59:32.623718 4745 generic.go:334] "Generic (PLEG): container finished" podID="e982fb4c-3818-4f04-b7ed-c32666261f07" containerID="eba15377726a51f5cbf390f21c83a10b8e4315b98f346b4a1825da1de8255e12" exitCode=143 Jan 21 10:59:32 crc kubenswrapper[4745]: I0121 10:59:32.623774 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e982fb4c-3818-4f04-b7ed-c32666261f07","Type":"ContainerDied","Data":"eba15377726a51f5cbf390f21c83a10b8e4315b98f346b4a1825da1de8255e12"} Jan 21 10:59:33 crc kubenswrapper[4745]: I0121 10:59:33.635125 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4","Type":"ContainerStarted","Data":"af942ac66315c8a8fac14ceafb31f02689dcab044a2a815069545bded27f4ba8"} Jan 21 10:59:33 crc kubenswrapper[4745]: I0121 10:59:33.713046 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-d56bdb47c-z8b9m" Jan 21 10:59:33 crc kubenswrapper[4745]: I0121 10:59:33.765951 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8664d6b777-4qd85"] Jan 21 10:59:33 crc kubenswrapper[4745]: I0121 10:59:33.937846 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6d44b77d95-2fvz9" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.014996 4745 scope.go:117] "RemoveContainer" containerID="655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.185736 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8467cf6f5c-snz6t"] Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.341325 4745 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.449317 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-combined-ca-bundle\") pod \"0e55f349-abda-42f1-aa42-80e2169fbd6d\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.450862 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2ql8\" (UniqueName: \"kubernetes.io/projected/0e55f349-abda-42f1-aa42-80e2169fbd6d-kube-api-access-r2ql8\") pod \"0e55f349-abda-42f1-aa42-80e2169fbd6d\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.451478 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data\") pod \"0e55f349-abda-42f1-aa42-80e2169fbd6d\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.451817 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data-custom\") pod \"0e55f349-abda-42f1-aa42-80e2169fbd6d\" (UID: \"0e55f349-abda-42f1-aa42-80e2169fbd6d\") " Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.485932 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e55f349-abda-42f1-aa42-80e2169fbd6d-kube-api-access-r2ql8" (OuterVolumeSpecName: "kube-api-access-r2ql8") pod "0e55f349-abda-42f1-aa42-80e2169fbd6d" (UID: "0e55f349-abda-42f1-aa42-80e2169fbd6d"). InnerVolumeSpecName "kube-api-access-r2ql8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.489624 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0e55f349-abda-42f1-aa42-80e2169fbd6d" (UID: "0e55f349-abda-42f1-aa42-80e2169fbd6d"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.543990 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data" (OuterVolumeSpecName: "config-data") pod "0e55f349-abda-42f1-aa42-80e2169fbd6d" (UID: "0e55f349-abda-42f1-aa42-80e2169fbd6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.544341 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0e55f349-abda-42f1-aa42-80e2169fbd6d" (UID: "0e55f349-abda-42f1-aa42-80e2169fbd6d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.554041 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.554065 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2ql8\" (UniqueName: \"kubernetes.io/projected/0e55f349-abda-42f1-aa42-80e2169fbd6d-kube-api-access-r2ql8\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.554078 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.554087 4745 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0e55f349-abda-42f1-aa42-80e2169fbd6d-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.647782 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8664d6b777-4qd85" event={"ID":"0e55f349-abda-42f1-aa42-80e2169fbd6d","Type":"ContainerDied","Data":"673784124bd1e0ec0d20f2c0fff928686b398011684e064595672ac4f4f48d67"} Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.648105 4745 scope.go:117] "RemoveContainer" containerID="ee13d6e7da84e9693b4718d40e9e4c8766ec4d7843dcb42c97790835d621dedb" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.648199 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8664d6b777-4qd85" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.677342 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8467cf6f5c-snz6t" event={"ID":"82af10a4-9afe-4316-a938-53633e6e0889","Type":"ContainerStarted","Data":"740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943"} Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.677500 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-8467cf6f5c-snz6t" podUID="82af10a4-9afe-4316-a938-53633e6e0889" containerName="heat-api" containerID="cri-o://740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943" gracePeriod=60 Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.677771 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.701662 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-8467cf6f5c-snz6t" podStartSLOduration=19.701642024 podStartE2EDuration="19.701642024s" podCreationTimestamp="2026-01-21 10:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:34.697253965 +0000 UTC m=+1359.158041573" watchObservedRunningTime="2026-01-21 10:59:34.701642024 +0000 UTC m=+1359.162429622" Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.729025 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-8664d6b777-4qd85"] Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.740814 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-8664d6b777-4qd85"] Jan 21 10:59:34 crc kubenswrapper[4745]: I0121 10:59:34.970893 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:35 crc kubenswrapper[4745]: 
I0121 10:59:35.328836 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.474293 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data\") pod \"82af10a4-9afe-4316-a938-53633e6e0889\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.474379 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data-custom\") pod \"82af10a4-9afe-4316-a938-53633e6e0889\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.474588 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-combined-ca-bundle\") pod \"82af10a4-9afe-4316-a938-53633e6e0889\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.474707 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szmxw\" (UniqueName: \"kubernetes.io/projected/82af10a4-9afe-4316-a938-53633e6e0889-kube-api-access-szmxw\") pod \"82af10a4-9afe-4316-a938-53633e6e0889\" (UID: \"82af10a4-9afe-4316-a938-53633e6e0889\") " Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.480525 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82af10a4-9afe-4316-a938-53633e6e0889-kube-api-access-szmxw" (OuterVolumeSpecName: "kube-api-access-szmxw") pod "82af10a4-9afe-4316-a938-53633e6e0889" (UID: "82af10a4-9afe-4316-a938-53633e6e0889"). InnerVolumeSpecName "kube-api-access-szmxw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.489645 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "82af10a4-9afe-4316-a938-53633e6e0889" (UID: "82af10a4-9afe-4316-a938-53633e6e0889"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.543751 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data" (OuterVolumeSpecName: "config-data") pod "82af10a4-9afe-4316-a938-53633e6e0889" (UID: "82af10a4-9afe-4316-a938-53633e6e0889"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.550594 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82af10a4-9afe-4316-a938-53633e6e0889" (UID: "82af10a4-9afe-4316-a938-53633e6e0889"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.577242 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szmxw\" (UniqueName: \"kubernetes.io/projected/82af10a4-9afe-4316-a938-53633e6e0889-kube-api-access-szmxw\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.577283 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.577293 4745 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.577304 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82af10a4-9afe-4316-a938-53633e6e0889-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.696212 4745 generic.go:334] "Generic (PLEG): container finished" podID="82af10a4-9afe-4316-a938-53633e6e0889" containerID="740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943" exitCode=1 Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.696290 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8467cf6f5c-snz6t" event={"ID":"82af10a4-9afe-4316-a938-53633e6e0889","Type":"ContainerDied","Data":"740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943"} Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.696315 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-8467cf6f5c-snz6t" 
event={"ID":"82af10a4-9afe-4316-a938-53633e6e0889","Type":"ContainerDied","Data":"adfde8e8a47443aa808c2f3a8ab69422532a98d3edbc13c3fc3e583eaa45088a"} Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.696352 4745 scope.go:117] "RemoveContainer" containerID="740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.696472 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-8467cf6f5c-snz6t" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.705018 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4","Type":"ContainerStarted","Data":"38a013aeeb9126d996019d96c0df2d7bd8b833c28d6fae4c71bbd65dc49fc217"} Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.705191 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="ceilometer-central-agent" containerID="cri-o://c2e0c913bec92bb83b3bba6c2dc48ebf8e7d0e70a623abe101c849400ca0c572" gracePeriod=30 Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.705286 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.705510 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="proxy-httpd" containerID="cri-o://38a013aeeb9126d996019d96c0df2d7bd8b833c28d6fae4c71bbd65dc49fc217" gracePeriod=30 Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.705591 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="ceilometer-notification-agent" 
containerID="cri-o://c135935c2e3790079244fcd9836eefe56b389d93137008398598d212dcd6a92b" gracePeriod=30 Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.705721 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="sg-core" containerID="cri-o://af942ac66315c8a8fac14ceafb31f02689dcab044a2a815069545bded27f4ba8" gracePeriod=30 Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.724840 4745 generic.go:334] "Generic (PLEG): container finished" podID="e982fb4c-3818-4f04-b7ed-c32666261f07" containerID="e3a9af177fbd76388849a99670cc03ceb63e58c65a61533b8636f9b26dac0aef" exitCode=0 Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.724896 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e982fb4c-3818-4f04-b7ed-c32666261f07","Type":"ContainerDied","Data":"e3a9af177fbd76388849a99670cc03ceb63e58c65a61533b8636f9b26dac0aef"} Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.734448 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.040349 podStartE2EDuration="6.734427477s" podCreationTimestamp="2026-01-21 10:59:29 +0000 UTC" firstStartedPulling="2026-01-21 10:59:30.12760442 +0000 UTC m=+1354.588392018" lastFinishedPulling="2026-01-21 10:59:34.821682897 +0000 UTC m=+1359.282470495" observedRunningTime="2026-01-21 10:59:35.728229738 +0000 UTC m=+1360.189017336" watchObservedRunningTime="2026-01-21 10:59:35.734427477 +0000 UTC m=+1360.195215075" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.755508 4745 scope.go:117] "RemoveContainer" containerID="655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.759463 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-8467cf6f5c-snz6t"] Jan 21 10:59:35 crc kubenswrapper[4745]: 
I0121 10:59:35.769280 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-8467cf6f5c-snz6t"] Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.808997 4745 scope.go:117] "RemoveContainer" containerID="740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943" Jan 21 10:59:35 crc kubenswrapper[4745]: E0121 10:59:35.809812 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943\": container with ID starting with 740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943 not found: ID does not exist" containerID="740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.809862 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943"} err="failed to get container status \"740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943\": rpc error: code = NotFound desc = could not find container \"740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943\": container with ID starting with 740783cea649750e46688617e9635fc3c309b340c0b462165a24b1a248812943 not found: ID does not exist" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.809885 4745 scope.go:117] "RemoveContainer" containerID="655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6" Jan 21 10:59:35 crc kubenswrapper[4745]: E0121 10:59:35.810206 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6\": container with ID starting with 655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6 not found: ID does not exist" 
containerID="655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.810228 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6"} err="failed to get container status \"655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6\": rpc error: code = NotFound desc = could not find container \"655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6\": container with ID starting with 655ce79cacc64a52d1d9bd327cff4652c0330de018c6dbc7a3e5ce7d2773d7a6 not found: ID does not exist" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.850314 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-67ddbd4cb4-nt52k" Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.982096 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-58b4779467-f9wqf"] Jan 21 10:59:35 crc kubenswrapper[4745]: I0121 10:59:35.984149 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-58b4779467-f9wqf" podUID="bf1d009d-bd84-435d-aeb4-8bf435eeea50" containerName="heat-engine" containerID="cri-o://e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee" gracePeriod=60 Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.018013 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e55f349-abda-42f1-aa42-80e2169fbd6d" path="/var/lib/kubelet/pods/0e55f349-abda-42f1-aa42-80e2169fbd6d/volumes" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.045418 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82af10a4-9afe-4316-a938-53633e6e0889" path="/var/lib/kubelet/pods/82af10a4-9afe-4316-a938-53633e6e0889/volumes" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.154658 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:36 crc kubenswrapper[4745]: E0121 10:59:36.256820 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 10:59:36 crc kubenswrapper[4745]: E0121 10:59:36.275200 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 10:59:36 crc kubenswrapper[4745]: E0121 10:59:36.282821 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 10:59:36 crc kubenswrapper[4745]: E0121 10:59:36.282906 4745 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-58b4779467-f9wqf" podUID="bf1d009d-bd84-435d-aeb4-8bf435eeea50" containerName="heat-engine" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.293701 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"e982fb4c-3818-4f04-b7ed-c32666261f07\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.293805 
4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-internal-tls-certs\") pod \"e982fb4c-3818-4f04-b7ed-c32666261f07\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.293862 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph6p8\" (UniqueName: \"kubernetes.io/projected/e982fb4c-3818-4f04-b7ed-c32666261f07-kube-api-access-ph6p8\") pod \"e982fb4c-3818-4f04-b7ed-c32666261f07\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.293890 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-httpd-run\") pod \"e982fb4c-3818-4f04-b7ed-c32666261f07\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.293923 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-config-data\") pod \"e982fb4c-3818-4f04-b7ed-c32666261f07\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.293945 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-combined-ca-bundle\") pod \"e982fb4c-3818-4f04-b7ed-c32666261f07\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.293989 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-scripts\") pod \"e982fb4c-3818-4f04-b7ed-c32666261f07\" 
(UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.294054 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-logs\") pod \"e982fb4c-3818-4f04-b7ed-c32666261f07\" (UID: \"e982fb4c-3818-4f04-b7ed-c32666261f07\") " Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.294574 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e982fb4c-3818-4f04-b7ed-c32666261f07" (UID: "e982fb4c-3818-4f04-b7ed-c32666261f07"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.296401 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-logs" (OuterVolumeSpecName: "logs") pod "e982fb4c-3818-4f04-b7ed-c32666261f07" (UID: "e982fb4c-3818-4f04-b7ed-c32666261f07"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.317048 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e982fb4c-3818-4f04-b7ed-c32666261f07-kube-api-access-ph6p8" (OuterVolumeSpecName: "kube-api-access-ph6p8") pod "e982fb4c-3818-4f04-b7ed-c32666261f07" (UID: "e982fb4c-3818-4f04-b7ed-c32666261f07"). InnerVolumeSpecName "kube-api-access-ph6p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.317172 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "e982fb4c-3818-4f04-b7ed-c32666261f07" (UID: "e982fb4c-3818-4f04-b7ed-c32666261f07"). 
InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.329880 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-scripts" (OuterVolumeSpecName: "scripts") pod "e982fb4c-3818-4f04-b7ed-c32666261f07" (UID: "e982fb4c-3818-4f04-b7ed-c32666261f07"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.396772 4745 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.396924 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph6p8\" (UniqueName: \"kubernetes.io/projected/e982fb4c-3818-4f04-b7ed-c32666261f07-kube-api-access-ph6p8\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.397006 4745 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.397095 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.397195 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e982fb4c-3818-4f04-b7ed-c32666261f07-logs\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.402673 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e982fb4c-3818-4f04-b7ed-c32666261f07" (UID: "e982fb4c-3818-4f04-b7ed-c32666261f07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.458956 4745 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.483197 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-config-data" (OuterVolumeSpecName: "config-data") pod "e982fb4c-3818-4f04-b7ed-c32666261f07" (UID: "e982fb4c-3818-4f04-b7ed-c32666261f07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.498886 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.498926 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.498938 4745 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.508812 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-internal-tls-certs" (OuterVolumeSpecName: 
"internal-tls-certs") pod "e982fb4c-3818-4f04-b7ed-c32666261f07" (UID: "e982fb4c-3818-4f04-b7ed-c32666261f07"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.600388 4745 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e982fb4c-3818-4f04-b7ed-c32666261f07-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.734158 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"e982fb4c-3818-4f04-b7ed-c32666261f07","Type":"ContainerDied","Data":"d71081d39486a443031698e1dcb8bd5b439b66a6237d1b1b24d484dc86f5dc2d"} Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.734193 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.734215 4745 scope.go:117] "RemoveContainer" containerID="e3a9af177fbd76388849a99670cc03ceb63e58c65a61533b8636f9b26dac0aef" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.745267 4745 generic.go:334] "Generic (PLEG): container finished" podID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerID="38a013aeeb9126d996019d96c0df2d7bd8b833c28d6fae4c71bbd65dc49fc217" exitCode=0 Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.745295 4745 generic.go:334] "Generic (PLEG): container finished" podID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerID="af942ac66315c8a8fac14ceafb31f02689dcab044a2a815069545bded27f4ba8" exitCode=2 Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.745304 4745 generic.go:334] "Generic (PLEG): container finished" podID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerID="c135935c2e3790079244fcd9836eefe56b389d93137008398598d212dcd6a92b" exitCode=0 Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.745318 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4","Type":"ContainerDied","Data":"38a013aeeb9126d996019d96c0df2d7bd8b833c28d6fae4c71bbd65dc49fc217"} Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.745363 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4","Type":"ContainerDied","Data":"af942ac66315c8a8fac14ceafb31f02689dcab044a2a815069545bded27f4ba8"} Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.745374 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4","Type":"ContainerDied","Data":"c135935c2e3790079244fcd9836eefe56b389d93137008398598d212dcd6a92b"} Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.767567 4745 scope.go:117] "RemoveContainer" containerID="eba15377726a51f5cbf390f21c83a10b8e4315b98f346b4a1825da1de8255e12" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.776284 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.801087 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.831700 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 10:59:36 crc kubenswrapper[4745]: E0121 10:59:36.832544 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82af10a4-9afe-4316-a938-53633e6e0889" containerName="heat-api" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.832561 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="82af10a4-9afe-4316-a938-53633e6e0889" containerName="heat-api" Jan 21 10:59:36 crc kubenswrapper[4745]: E0121 10:59:36.832590 4745 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="82af10a4-9afe-4316-a938-53633e6e0889" containerName="heat-api" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.832596 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="82af10a4-9afe-4316-a938-53633e6e0889" containerName="heat-api" Jan 21 10:59:36 crc kubenswrapper[4745]: E0121 10:59:36.832627 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82af10a4-9afe-4316-a938-53633e6e0889" containerName="heat-api" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.832634 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="82af10a4-9afe-4316-a938-53633e6e0889" containerName="heat-api" Jan 21 10:59:36 crc kubenswrapper[4745]: E0121 10:59:36.832655 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e55f349-abda-42f1-aa42-80e2169fbd6d" containerName="heat-cfnapi" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.832661 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e55f349-abda-42f1-aa42-80e2169fbd6d" containerName="heat-cfnapi" Jan 21 10:59:36 crc kubenswrapper[4745]: E0121 10:59:36.832674 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e55f349-abda-42f1-aa42-80e2169fbd6d" containerName="heat-cfnapi" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.832681 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e55f349-abda-42f1-aa42-80e2169fbd6d" containerName="heat-cfnapi" Jan 21 10:59:36 crc kubenswrapper[4745]: E0121 10:59:36.832703 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e982fb4c-3818-4f04-b7ed-c32666261f07" containerName="glance-log" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.832709 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e982fb4c-3818-4f04-b7ed-c32666261f07" containerName="glance-log" Jan 21 10:59:36 crc kubenswrapper[4745]: E0121 10:59:36.832720 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e982fb4c-3818-4f04-b7ed-c32666261f07" 
containerName="glance-httpd" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.832726 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e982fb4c-3818-4f04-b7ed-c32666261f07" containerName="glance-httpd" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.833034 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e982fb4c-3818-4f04-b7ed-c32666261f07" containerName="glance-log" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.833062 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="82af10a4-9afe-4316-a938-53633e6e0889" containerName="heat-api" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.833075 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e982fb4c-3818-4f04-b7ed-c32666261f07" containerName="glance-httpd" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.833083 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="82af10a4-9afe-4316-a938-53633e6e0889" containerName="heat-api" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.833096 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e55f349-abda-42f1-aa42-80e2169fbd6d" containerName="heat-cfnapi" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.833103 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e55f349-abda-42f1-aa42-80e2169fbd6d" containerName="heat-cfnapi" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.833114 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="82af10a4-9afe-4316-a938-53633e6e0889" containerName="heat-api" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.835084 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.851398 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.851637 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 21 10:59:36 crc kubenswrapper[4745]: I0121 10:59:36.858736 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.025740 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxl9v\" (UniqueName: \"kubernetes.io/projected/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-kube-api-access-cxl9v\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.025812 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.025857 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.025929 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.025990 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.026016 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.026051 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.026081 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.127928 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.127970 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.128009 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.128028 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.128123 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxl9v\" (UniqueName: \"kubernetes.io/projected/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-kube-api-access-cxl9v\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.128153 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.128178 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-logs\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.128261 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.128697 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.129970 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.130310 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.138171 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.138958 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.140444 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.143237 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.154549 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxl9v\" (UniqueName: \"kubernetes.io/projected/2d8d9e72-16c4-4372-8d2f-d116c68a4d2a-kube-api-access-cxl9v\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " 
pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.175622 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-internal-api-0\" (UID: \"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a\") " pod="openstack/glance-default-internal-api-0" Jan 21 10:59:37 crc kubenswrapper[4745]: I0121 10:59:37.463876 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:38 crc kubenswrapper[4745]: I0121 10:59:38.011899 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e982fb4c-3818-4f04-b7ed-c32666261f07" path="/var/lib/kubelet/pods/e982fb4c-3818-4f04-b7ed-c32666261f07/volumes" Jan 21 10:59:38 crc kubenswrapper[4745]: I0121 10:59:38.210441 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 21 10:59:38 crc kubenswrapper[4745]: W0121 10:59:38.235949 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d8d9e72_16c4_4372_8d2f_d116c68a4d2a.slice/crio-63ef61b7a64151a4b8cb382c9be7234b23a074a6c9eca4561a4a38691934fb44 WatchSource:0}: Error finding container 63ef61b7a64151a4b8cb382c9be7234b23a074a6c9eca4561a4a38691934fb44: Status 404 returned error can't find the container with id 63ef61b7a64151a4b8cb382c9be7234b23a074a6c9eca4561a4a38691934fb44 Jan 21 10:59:38 crc kubenswrapper[4745]: I0121 10:59:38.791836 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a","Type":"ContainerStarted","Data":"63ef61b7a64151a4b8cb382c9be7234b23a074a6c9eca4561a4a38691934fb44"} Jan 21 10:59:39 crc kubenswrapper[4745]: I0121 10:59:39.801878 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a","Type":"ContainerStarted","Data":"88eff79456cf6542933e02446b06b8ed16a61b2aff6f8458cce37f9b5ef0f549"} Jan 21 10:59:39 crc kubenswrapper[4745]: I0121 10:59:39.802218 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2d8d9e72-16c4-4372-8d2f-d116c68a4d2a","Type":"ContainerStarted","Data":"90f5dcc6bd6e23e150a081b83f03c786726dd15767f6a9889ceb236dc3bb2ec3"} Jan 21 10:59:39 crc kubenswrapper[4745]: I0121 10:59:39.823741 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.823719235 podStartE2EDuration="3.823719235s" podCreationTimestamp="2026-01-21 10:59:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:39.818701868 +0000 UTC m=+1364.279489476" watchObservedRunningTime="2026-01-21 10:59:39.823719235 +0000 UTC m=+1364.284506823" Jan 21 10:59:42 crc kubenswrapper[4745]: I0121 10:59:42.845482 4745 generic.go:334] "Generic (PLEG): container finished" podID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerID="c2e0c913bec92bb83b3bba6c2dc48ebf8e7d0e70a623abe101c849400ca0c572" exitCode=0 Jan 21 10:59:42 crc kubenswrapper[4745]: I0121 10:59:42.846340 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4","Type":"ContainerDied","Data":"c2e0c913bec92bb83b3bba6c2dc48ebf8e7d0e70a623abe101c849400ca0c572"} Jan 21 10:59:42 crc kubenswrapper[4745]: I0121 10:59:42.927819 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.080059 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-log-httpd\") pod \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.080137 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-combined-ca-bundle\") pod \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.080181 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-scripts\") pod \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.080275 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4jbk\" (UniqueName: \"kubernetes.io/projected/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-kube-api-access-m4jbk\") pod \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.080320 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-sg-core-conf-yaml\") pod \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.080368 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-run-httpd\") pod \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.080424 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-config-data\") pod \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\" (UID: \"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4\") " Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.080659 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" (UID: "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.081240 4745 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.081518 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" (UID: "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.088077 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-kube-api-access-m4jbk" (OuterVolumeSpecName: "kube-api-access-m4jbk") pod "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" (UID: "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4"). 
InnerVolumeSpecName "kube-api-access-m4jbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.106985 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-scripts" (OuterVolumeSpecName: "scripts") pod "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" (UID: "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.116291 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" (UID: "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.186246 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.186279 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4jbk\" (UniqueName: \"kubernetes.io/projected/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-kube-api-access-m4jbk\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.186291 4745 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.186301 4745 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-run-httpd\") on node 
\"crc\" DevicePath \"\"" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.191715 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" (UID: "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.224050 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-config-data" (OuterVolumeSpecName: "config-data") pod "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" (UID: "b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.287691 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.287721 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.861040 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4","Type":"ContainerDied","Data":"6e2c5fb0aad885b89d2a75d179f50fb5f21ccf46ce0963bd2923ae6938d936e0"} Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.861098 4745 scope.go:117] "RemoveContainer" containerID="38a013aeeb9126d996019d96c0df2d7bd8b833c28d6fae4c71bbd65dc49fc217" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.861274 4745 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.897963 4745 scope.go:117] "RemoveContainer" containerID="af942ac66315c8a8fac14ceafb31f02689dcab044a2a815069545bded27f4ba8" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.922773 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.928886 4745 scope.go:117] "RemoveContainer" containerID="c135935c2e3790079244fcd9836eefe56b389d93137008398598d212dcd6a92b" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.936627 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.957993 4745 scope.go:117] "RemoveContainer" containerID="c2e0c913bec92bb83b3bba6c2dc48ebf8e7d0e70a623abe101c849400ca0c572" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.963159 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:43 crc kubenswrapper[4745]: E0121 10:59:43.963948 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="sg-core" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.964145 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="sg-core" Jan 21 10:59:43 crc kubenswrapper[4745]: E0121 10:59:43.964247 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="ceilometer-central-agent" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.964326 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="ceilometer-central-agent" Jan 21 10:59:43 crc kubenswrapper[4745]: E0121 10:59:43.964397 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="ceilometer-notification-agent" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.964462 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="ceilometer-notification-agent" Jan 21 10:59:43 crc kubenswrapper[4745]: E0121 10:59:43.964578 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="proxy-httpd" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.964656 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="proxy-httpd" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.964952 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="ceilometer-notification-agent" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.965062 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="proxy-httpd" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.965139 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="sg-core" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.966568 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" containerName="ceilometer-central-agent" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.968513 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.974329 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 10:59:43 crc kubenswrapper[4745]: I0121 10:59:43.975213 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.126593 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-scripts\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.126691 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.126723 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps64w\" (UniqueName: \"kubernetes.io/projected/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-kube-api-access-ps64w\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.126755 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.126830 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-run-httpd\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.126882 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-log-httpd\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.126930 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-config-data\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.205326 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4" path="/var/lib/kubelet/pods/b5b65f0d-f37e-4eea-92e4-b45e7a1f34f4/volumes" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.207857 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.207904 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-kqm6s"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.210774 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kqm6s"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.210819 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6hd2t"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.211139 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kqm6s" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.213265 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hd2t" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.224181 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6hd2t"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.234604 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-run-httpd\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.234716 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-log-httpd\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.234805 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-config-data\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.234854 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-scripts\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.234957 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.234985 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps64w\" (UniqueName: \"kubernetes.io/projected/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-kube-api-access-ps64w\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.235023 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.237099 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-run-httpd\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.238655 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-log-httpd\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.256936 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 
10:59:44.257623 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-config-data\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.258092 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.260688 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-scripts\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.261587 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps64w\" (UniqueName: \"kubernetes.io/projected/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-kube-api-access-ps64w\") pod \"ceilometer-0\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.327694 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-zdc2f"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.328961 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-zdc2f" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.344687 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c096e9f7-6065-4656-82c3-167bd595c303-operator-scripts\") pod \"nova-cell0-db-create-6hd2t\" (UID: \"c096e9f7-6065-4656-82c3-167bd595c303\") " pod="openstack/nova-cell0-db-create-6hd2t" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.344823 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c439a3b-429b-45f7-be39-a4fcbcf904b8-operator-scripts\") pod \"nova-api-db-create-kqm6s\" (UID: \"0c439a3b-429b-45f7-be39-a4fcbcf904b8\") " pod="openstack/nova-api-db-create-kqm6s" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.344889 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rb52\" (UniqueName: \"kubernetes.io/projected/c096e9f7-6065-4656-82c3-167bd595c303-kube-api-access-5rb52\") pod \"nova-cell0-db-create-6hd2t\" (UID: \"c096e9f7-6065-4656-82c3-167bd595c303\") " pod="openstack/nova-cell0-db-create-6hd2t" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.345890 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7rpw\" (UniqueName: \"kubernetes.io/projected/0c439a3b-429b-45f7-be39-a4fcbcf904b8-kube-api-access-n7rpw\") pod \"nova-api-db-create-kqm6s\" (UID: \"0c439a3b-429b-45f7-be39-a4fcbcf904b8\") " pod="openstack/nova-api-db-create-kqm6s" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.357090 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.365628 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-ea70-account-create-update-9hjhh"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.367929 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ea70-account-create-update-9hjhh" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.397068 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zdc2f"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.417307 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.447137 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c096e9f7-6065-4656-82c3-167bd595c303-operator-scripts\") pod \"nova-cell0-db-create-6hd2t\" (UID: \"c096e9f7-6065-4656-82c3-167bd595c303\") " pod="openstack/nova-cell0-db-create-6hd2t" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.447195 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c439a3b-429b-45f7-be39-a4fcbcf904b8-operator-scripts\") pod \"nova-api-db-create-kqm6s\" (UID: \"0c439a3b-429b-45f7-be39-a4fcbcf904b8\") " pod="openstack/nova-api-db-create-kqm6s" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.447227 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstjb\" (UniqueName: \"kubernetes.io/projected/867be566-a37c-499e-9d6b-026bbc370fe5-kube-api-access-zstjb\") pod \"nova-cell1-db-create-zdc2f\" (UID: \"867be566-a37c-499e-9d6b-026bbc370fe5\") " pod="openstack/nova-cell1-db-create-zdc2f" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 
10:59:44.447255 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rb52\" (UniqueName: \"kubernetes.io/projected/c096e9f7-6065-4656-82c3-167bd595c303-kube-api-access-5rb52\") pod \"nova-cell0-db-create-6hd2t\" (UID: \"c096e9f7-6065-4656-82c3-167bd595c303\") " pod="openstack/nova-cell0-db-create-6hd2t" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.447295 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/867be566-a37c-499e-9d6b-026bbc370fe5-operator-scripts\") pod \"nova-cell1-db-create-zdc2f\" (UID: \"867be566-a37c-499e-9d6b-026bbc370fe5\") " pod="openstack/nova-cell1-db-create-zdc2f" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.447348 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mll6f\" (UniqueName: \"kubernetes.io/projected/25000567-9488-4bd2-8b57-a2b4b1f41366-kube-api-access-mll6f\") pod \"nova-api-ea70-account-create-update-9hjhh\" (UID: \"25000567-9488-4bd2-8b57-a2b4b1f41366\") " pod="openstack/nova-api-ea70-account-create-update-9hjhh" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.447413 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25000567-9488-4bd2-8b57-a2b4b1f41366-operator-scripts\") pod \"nova-api-ea70-account-create-update-9hjhh\" (UID: \"25000567-9488-4bd2-8b57-a2b4b1f41366\") " pod="openstack/nova-api-ea70-account-create-update-9hjhh" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.447435 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7rpw\" (UniqueName: \"kubernetes.io/projected/0c439a3b-429b-45f7-be39-a4fcbcf904b8-kube-api-access-n7rpw\") pod \"nova-api-db-create-kqm6s\" (UID: \"0c439a3b-429b-45f7-be39-a4fcbcf904b8\") " 
pod="openstack/nova-api-db-create-kqm6s" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.452135 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c096e9f7-6065-4656-82c3-167bd595c303-operator-scripts\") pod \"nova-cell0-db-create-6hd2t\" (UID: \"c096e9f7-6065-4656-82c3-167bd595c303\") " pod="openstack/nova-cell0-db-create-6hd2t" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.452857 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c439a3b-429b-45f7-be39-a4fcbcf904b8-operator-scripts\") pod \"nova-api-db-create-kqm6s\" (UID: \"0c439a3b-429b-45f7-be39-a4fcbcf904b8\") " pod="openstack/nova-api-db-create-kqm6s" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.473383 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ea70-account-create-update-9hjhh"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.490731 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7rpw\" (UniqueName: \"kubernetes.io/projected/0c439a3b-429b-45f7-be39-a4fcbcf904b8-kube-api-access-n7rpw\") pod \"nova-api-db-create-kqm6s\" (UID: \"0c439a3b-429b-45f7-be39-a4fcbcf904b8\") " pod="openstack/nova-api-db-create-kqm6s" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.497603 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rb52\" (UniqueName: \"kubernetes.io/projected/c096e9f7-6065-4656-82c3-167bd595c303-kube-api-access-5rb52\") pod \"nova-cell0-db-create-6hd2t\" (UID: \"c096e9f7-6065-4656-82c3-167bd595c303\") " pod="openstack/nova-cell0-db-create-6hd2t" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.556762 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-kqm6s" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.556939 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/867be566-a37c-499e-9d6b-026bbc370fe5-operator-scripts\") pod \"nova-cell1-db-create-zdc2f\" (UID: \"867be566-a37c-499e-9d6b-026bbc370fe5\") " pod="openstack/nova-cell1-db-create-zdc2f" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.557028 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mll6f\" (UniqueName: \"kubernetes.io/projected/25000567-9488-4bd2-8b57-a2b4b1f41366-kube-api-access-mll6f\") pod \"nova-api-ea70-account-create-update-9hjhh\" (UID: \"25000567-9488-4bd2-8b57-a2b4b1f41366\") " pod="openstack/nova-api-ea70-account-create-update-9hjhh" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.557105 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25000567-9488-4bd2-8b57-a2b4b1f41366-operator-scripts\") pod \"nova-api-ea70-account-create-update-9hjhh\" (UID: \"25000567-9488-4bd2-8b57-a2b4b1f41366\") " pod="openstack/nova-api-ea70-account-create-update-9hjhh" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.557153 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zstjb\" (UniqueName: \"kubernetes.io/projected/867be566-a37c-499e-9d6b-026bbc370fe5-kube-api-access-zstjb\") pod \"nova-cell1-db-create-zdc2f\" (UID: \"867be566-a37c-499e-9d6b-026bbc370fe5\") " pod="openstack/nova-cell1-db-create-zdc2f" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.558445 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25000567-9488-4bd2-8b57-a2b4b1f41366-operator-scripts\") pod 
\"nova-api-ea70-account-create-update-9hjhh\" (UID: \"25000567-9488-4bd2-8b57-a2b4b1f41366\") " pod="openstack/nova-api-ea70-account-create-update-9hjhh" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.559308 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/867be566-a37c-499e-9d6b-026bbc370fe5-operator-scripts\") pod \"nova-cell1-db-create-zdc2f\" (UID: \"867be566-a37c-499e-9d6b-026bbc370fe5\") " pod="openstack/nova-cell1-db-create-zdc2f" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.600678 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mll6f\" (UniqueName: \"kubernetes.io/projected/25000567-9488-4bd2-8b57-a2b4b1f41366-kube-api-access-mll6f\") pod \"nova-api-ea70-account-create-update-9hjhh\" (UID: \"25000567-9488-4bd2-8b57-a2b4b1f41366\") " pod="openstack/nova-api-ea70-account-create-update-9hjhh" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.608890 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zstjb\" (UniqueName: \"kubernetes.io/projected/867be566-a37c-499e-9d6b-026bbc370fe5-kube-api-access-zstjb\") pod \"nova-cell1-db-create-zdc2f\" (UID: \"867be566-a37c-499e-9d6b-026bbc370fe5\") " pod="openstack/nova-cell1-db-create-zdc2f" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.617339 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-46df-account-create-update-ckz9b"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.618605 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-46df-account-create-update-ckz9b" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.623473 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.630435 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-46df-account-create-update-ckz9b"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.655820 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hd2t" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.665058 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4008d5c8-f775-45b9-bffc-fcbbd41768ba-operator-scripts\") pod \"nova-cell0-46df-account-create-update-ckz9b\" (UID: \"4008d5c8-f775-45b9-bffc-fcbbd41768ba\") " pod="openstack/nova-cell0-46df-account-create-update-ckz9b" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.665232 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57lsb\" (UniqueName: \"kubernetes.io/projected/4008d5c8-f775-45b9-bffc-fcbbd41768ba-kube-api-access-57lsb\") pod \"nova-cell0-46df-account-create-update-ckz9b\" (UID: \"4008d5c8-f775-45b9-bffc-fcbbd41768ba\") " pod="openstack/nova-cell0-46df-account-create-update-ckz9b" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.687058 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zdc2f" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.702980 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-d3be-account-create-update-b5s94"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.704307 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d3be-account-create-update-b5s94" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.719163 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.767091 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eaf1233-ea59-4baf-ab46-f24a0b142b80-operator-scripts\") pod \"nova-cell1-d3be-account-create-update-b5s94\" (UID: \"7eaf1233-ea59-4baf-ab46-f24a0b142b80\") " pod="openstack/nova-cell1-d3be-account-create-update-b5s94" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.767159 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2grx\" (UniqueName: \"kubernetes.io/projected/7eaf1233-ea59-4baf-ab46-f24a0b142b80-kube-api-access-r2grx\") pod \"nova-cell1-d3be-account-create-update-b5s94\" (UID: \"7eaf1233-ea59-4baf-ab46-f24a0b142b80\") " pod="openstack/nova-cell1-d3be-account-create-update-b5s94" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.767230 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57lsb\" (UniqueName: \"kubernetes.io/projected/4008d5c8-f775-45b9-bffc-fcbbd41768ba-kube-api-access-57lsb\") pod \"nova-cell0-46df-account-create-update-ckz9b\" (UID: \"4008d5c8-f775-45b9-bffc-fcbbd41768ba\") " pod="openstack/nova-cell0-46df-account-create-update-ckz9b" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.767273 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4008d5c8-f775-45b9-bffc-fcbbd41768ba-operator-scripts\") pod \"nova-cell0-46df-account-create-update-ckz9b\" (UID: \"4008d5c8-f775-45b9-bffc-fcbbd41768ba\") " pod="openstack/nova-cell0-46df-account-create-update-ckz9b" 
Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.774266 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4008d5c8-f775-45b9-bffc-fcbbd41768ba-operator-scripts\") pod \"nova-cell0-46df-account-create-update-ckz9b\" (UID: \"4008d5c8-f775-45b9-bffc-fcbbd41768ba\") " pod="openstack/nova-cell0-46df-account-create-update-ckz9b" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.783640 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d3be-account-create-update-b5s94"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.788676 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ea70-account-create-update-9hjhh" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.807829 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.808141 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerName="glance-log" containerID="cri-o://8b5d6aff5cd21f1dab9c1e52236e926cbc75280823886bb39f699a251dbe75fe" gracePeriod=30 Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.808483 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerName="glance-httpd" containerID="cri-o://f25e04d1c510833a32aa438356534008fa152ac0df553795c6bbfbdfaa3bf8ce" gracePeriod=30 Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.848097 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57lsb\" (UniqueName: \"kubernetes.io/projected/4008d5c8-f775-45b9-bffc-fcbbd41768ba-kube-api-access-57lsb\") pod \"nova-cell0-46df-account-create-update-ckz9b\" (UID: 
\"4008d5c8-f775-45b9-bffc-fcbbd41768ba\") " pod="openstack/nova-cell0-46df-account-create-update-ckz9b" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.872415 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2grx\" (UniqueName: \"kubernetes.io/projected/7eaf1233-ea59-4baf-ab46-f24a0b142b80-kube-api-access-r2grx\") pod \"nova-cell1-d3be-account-create-update-b5s94\" (UID: \"7eaf1233-ea59-4baf-ab46-f24a0b142b80\") " pod="openstack/nova-cell1-d3be-account-create-update-b5s94" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.873392 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eaf1233-ea59-4baf-ab46-f24a0b142b80-operator-scripts\") pod \"nova-cell1-d3be-account-create-update-b5s94\" (UID: \"7eaf1233-ea59-4baf-ab46-f24a0b142b80\") " pod="openstack/nova-cell1-d3be-account-create-update-b5s94" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.874493 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eaf1233-ea59-4baf-ab46-f24a0b142b80-operator-scripts\") pod \"nova-cell1-d3be-account-create-update-b5s94\" (UID: \"7eaf1233-ea59-4baf-ab46-f24a0b142b80\") " pod="openstack/nova-cell1-d3be-account-create-update-b5s94" Jan 21 10:59:44 crc kubenswrapper[4745]: I0121 10:59:44.936178 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2grx\" (UniqueName: \"kubernetes.io/projected/7eaf1233-ea59-4baf-ab46-f24a0b142b80-kube-api-access-r2grx\") pod \"nova-cell1-d3be-account-create-update-b5s94\" (UID: \"7eaf1233-ea59-4baf-ab46-f24a0b142b80\") " pod="openstack/nova-cell1-d3be-account-create-update-b5s94" Jan 21 10:59:45 crc kubenswrapper[4745]: I0121 10:59:45.145141 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-46df-account-create-update-ckz9b" Jan 21 10:59:45 crc kubenswrapper[4745]: I0121 10:59:45.172078 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d3be-account-create-update-b5s94" Jan 21 10:59:45 crc kubenswrapper[4745]: I0121 10:59:45.431914 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:45 crc kubenswrapper[4745]: W0121 10:59:45.452851 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f2e0a6b_c7e8_44c5_b4f9_e3d843dead70.slice/crio-b03ded853711c85ba1e881681420816ba5e5a88a2a7ec176b4b3e8c536d4925e WatchSource:0}: Error finding container b03ded853711c85ba1e881681420816ba5e5a88a2a7ec176b4b3e8c536d4925e: Status 404 returned error can't find the container with id b03ded853711c85ba1e881681420816ba5e5a88a2a7ec176b4b3e8c536d4925e Jan 21 10:59:45 crc kubenswrapper[4745]: I0121 10:59:45.598340 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-zdc2f"] Jan 21 10:59:45 crc kubenswrapper[4745]: I0121 10:59:45.630607 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-kqm6s"] Jan 21 10:59:45 crc kubenswrapper[4745]: W0121 10:59:45.645106 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod867be566_a37c_499e_9d6b_026bbc370fe5.slice/crio-fa77d9891d8dee926248eb3bad86a170215cd8c106b4c8101e9decb21dcff8d6 WatchSource:0}: Error finding container fa77d9891d8dee926248eb3bad86a170215cd8c106b4c8101e9decb21dcff8d6: Status 404 returned error can't find the container with id fa77d9891d8dee926248eb3bad86a170215cd8c106b4c8101e9decb21dcff8d6 Jan 21 10:59:45 crc kubenswrapper[4745]: I0121 10:59:45.658560 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6hd2t"] 
Jan 21 10:59:45 crc kubenswrapper[4745]: I0121 10:59:45.945865 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-ea70-account-create-update-9hjhh"] Jan 21 10:59:45 crc kubenswrapper[4745]: I0121 10:59:45.970307 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-d3be-account-create-update-b5s94"] Jan 21 10:59:46 crc kubenswrapper[4745]: I0121 10:59:45.993881 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zdc2f" event={"ID":"867be566-a37c-499e-9d6b-026bbc370fe5","Type":"ContainerStarted","Data":"fa77d9891d8dee926248eb3bad86a170215cd8c106b4c8101e9decb21dcff8d6"} Jan 21 10:59:46 crc kubenswrapper[4745]: I0121 10:59:46.023679 4745 generic.go:334] "Generic (PLEG): container finished" podID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerID="8b5d6aff5cd21f1dab9c1e52236e926cbc75280823886bb39f699a251dbe75fe" exitCode=143 Jan 21 10:59:46 crc kubenswrapper[4745]: I0121 10:59:46.064908 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be","Type":"ContainerDied","Data":"8b5d6aff5cd21f1dab9c1e52236e926cbc75280823886bb39f699a251dbe75fe"} Jan 21 10:59:46 crc kubenswrapper[4745]: I0121 10:59:46.065037 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kqm6s" event={"ID":"0c439a3b-429b-45f7-be39-a4fcbcf904b8","Type":"ContainerStarted","Data":"68f7037d47fc4091eea6f84cf29f5ec9e0f8881df3379719e16f77f95b9e11fa"} Jan 21 10:59:46 crc kubenswrapper[4745]: I0121 10:59:46.065051 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70","Type":"ContainerStarted","Data":"b03ded853711c85ba1e881681420816ba5e5a88a2a7ec176b4b3e8c536d4925e"} Jan 21 10:59:46 crc kubenswrapper[4745]: I0121 10:59:46.065065 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-46df-account-create-update-ckz9b"] Jan 21 10:59:46 crc kubenswrapper[4745]: I0121 10:59:46.067736 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hd2t" event={"ID":"c096e9f7-6065-4656-82c3-167bd595c303","Type":"ContainerStarted","Data":"a8f7a10ef15e63682d2b41a85c14ade26e579851f6e85b5b141dc9bf8614b511"} Jan 21 10:59:46 crc kubenswrapper[4745]: E0121 10:59:46.273274 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 10:59:46 crc kubenswrapper[4745]: E0121 10:59:46.275818 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 10:59:46 crc kubenswrapper[4745]: E0121 10:59:46.280314 4745 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 21 10:59:46 crc kubenswrapper[4745]: E0121 10:59:46.280364 4745 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-58b4779467-f9wqf" podUID="bf1d009d-bd84-435d-aeb4-8bf435eeea50" containerName="heat-engine" Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.115014 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46df-account-create-update-ckz9b" event={"ID":"4008d5c8-f775-45b9-bffc-fcbbd41768ba","Type":"ContainerStarted","Data":"7dfa3637b16cbe2749f299aa03af13f78f46d746c3d19c167f62d3973b8553ec"} Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.115392 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46df-account-create-update-ckz9b" event={"ID":"4008d5c8-f775-45b9-bffc-fcbbd41768ba","Type":"ContainerStarted","Data":"1983f5fbbe655b67f0a519e1cb7889c4b834f8dba58e79154b65088d7c28e053"} Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.124607 4745 generic.go:334] "Generic (PLEG): container finished" podID="c096e9f7-6065-4656-82c3-167bd595c303" containerID="46e3d1395eabb7cca6c8a7c2b76bc9fdbc5806b059d8b8e959b93482eee75116" exitCode=0 Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.124817 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hd2t" event={"ID":"c096e9f7-6065-4656-82c3-167bd595c303","Type":"ContainerDied","Data":"46e3d1395eabb7cca6c8a7c2b76bc9fdbc5806b059d8b8e959b93482eee75116"} Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.130177 4745 generic.go:334] "Generic (PLEG): container finished" podID="867be566-a37c-499e-9d6b-026bbc370fe5" containerID="22a5791efd23720cc761079399543e5686cf800f65231d2edb1e3221d13f2a53" exitCode=0 Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.130368 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zdc2f" event={"ID":"867be566-a37c-499e-9d6b-026bbc370fe5","Type":"ContainerDied","Data":"22a5791efd23720cc761079399543e5686cf800f65231d2edb1e3221d13f2a53"} Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.142200 4745 generic.go:334] "Generic (PLEG): container finished" podID="0c439a3b-429b-45f7-be39-a4fcbcf904b8" containerID="e460acd1f01f3d93f03a584dadc6ecf18d2e36b6e0ad643f20d58b0b836cdab0" exitCode=0 Jan 21 10:59:47 crc 
kubenswrapper[4745]: I0121 10:59:47.142265 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kqm6s" event={"ID":"0c439a3b-429b-45f7-be39-a4fcbcf904b8","Type":"ContainerDied","Data":"e460acd1f01f3d93f03a584dadc6ecf18d2e36b6e0ad643f20d58b0b836cdab0"} Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.150586 4745 generic.go:334] "Generic (PLEG): container finished" podID="25000567-9488-4bd2-8b57-a2b4b1f41366" containerID="e82ed3c3961cdea0b37f759d00cb79a10c4370c25f360fec1991f8ed3ff84fa6" exitCode=0 Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.150743 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ea70-account-create-update-9hjhh" event={"ID":"25000567-9488-4bd2-8b57-a2b4b1f41366","Type":"ContainerDied","Data":"e82ed3c3961cdea0b37f759d00cb79a10c4370c25f360fec1991f8ed3ff84fa6"} Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.150816 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ea70-account-create-update-9hjhh" event={"ID":"25000567-9488-4bd2-8b57-a2b4b1f41366","Type":"ContainerStarted","Data":"6d2eb67b7c854be18ea9eaeb2b7ac1f28b03e828866fb9796c303ee0198a885a"} Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.170376 4745 generic.go:334] "Generic (PLEG): container finished" podID="7eaf1233-ea59-4baf-ab46-f24a0b142b80" containerID="43d82d5a3110e11893aa1467f6d3aa403e213bc5a83a286643fbf64cf8b0853d" exitCode=0 Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.170506 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d3be-account-create-update-b5s94" event={"ID":"7eaf1233-ea59-4baf-ab46-f24a0b142b80","Type":"ContainerDied","Data":"43d82d5a3110e11893aa1467f6d3aa403e213bc5a83a286643fbf64cf8b0853d"} Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.171109 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d3be-account-create-update-b5s94" 
event={"ID":"7eaf1233-ea59-4baf-ab46-f24a0b142b80","Type":"ContainerStarted","Data":"1cc4726cae8313307419cf5d282feff822203d64dcf32b40cc28698babd0c95b"} Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.178503 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70","Type":"ContainerStarted","Data":"252ce9c30a02708cbdae0f9bce6025c88e292fb590512fb06748537bd320e112"} Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.464267 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.464832 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.545209 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:47 crc kubenswrapper[4745]: I0121 10:59:47.554198 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.098141 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.155:9292/healthcheck\": read tcp 10.217.0.2:52770->10.217.0.155:9292: read: connection reset by peer" Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.099442 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.155:9292/healthcheck\": read tcp 10.217.0.2:52762->10.217.0.155:9292: read: connection reset by peer" Jan 21 10:59:48 crc 
kubenswrapper[4745]: I0121 10:59:48.239568 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70","Type":"ContainerStarted","Data":"e7de4d1251266c611e2b74fab0b6d66e8dd3496e8af2cab45005be02b309b10e"} Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.245135 4745 generic.go:334] "Generic (PLEG): container finished" podID="4008d5c8-f775-45b9-bffc-fcbbd41768ba" containerID="7dfa3637b16cbe2749f299aa03af13f78f46d746c3d19c167f62d3973b8553ec" exitCode=0 Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.245229 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46df-account-create-update-ckz9b" event={"ID":"4008d5c8-f775-45b9-bffc-fcbbd41768ba","Type":"ContainerDied","Data":"7dfa3637b16cbe2749f299aa03af13f78f46d746c3d19c167f62d3973b8553ec"} Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.248898 4745 generic.go:334] "Generic (PLEG): container finished" podID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerID="f25e04d1c510833a32aa438356534008fa152ac0df553795c6bbfbdfaa3bf8ce" exitCode=0 Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.249130 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be","Type":"ContainerDied","Data":"f25e04d1c510833a32aa438356534008fa152ac0df553795c6bbfbdfaa3bf8ce"} Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.251374 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.251397 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.856544 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-46df-account-create-update-ckz9b" Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.955724 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57lsb\" (UniqueName: \"kubernetes.io/projected/4008d5c8-f775-45b9-bffc-fcbbd41768ba-kube-api-access-57lsb\") pod \"4008d5c8-f775-45b9-bffc-fcbbd41768ba\" (UID: \"4008d5c8-f775-45b9-bffc-fcbbd41768ba\") " Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.955963 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4008d5c8-f775-45b9-bffc-fcbbd41768ba-operator-scripts\") pod \"4008d5c8-f775-45b9-bffc-fcbbd41768ba\" (UID: \"4008d5c8-f775-45b9-bffc-fcbbd41768ba\") " Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.957705 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4008d5c8-f775-45b9-bffc-fcbbd41768ba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4008d5c8-f775-45b9-bffc-fcbbd41768ba" (UID: "4008d5c8-f775-45b9-bffc-fcbbd41768ba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:59:48 crc kubenswrapper[4745]: I0121 10:59:48.992872 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4008d5c8-f775-45b9-bffc-fcbbd41768ba-kube-api-access-57lsb" (OuterVolumeSpecName: "kube-api-access-57lsb") pod "4008d5c8-f775-45b9-bffc-fcbbd41768ba" (UID: "4008d5c8-f775-45b9-bffc-fcbbd41768ba"). InnerVolumeSpecName "kube-api-access-57lsb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.060369 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4008d5c8-f775-45b9-bffc-fcbbd41768ba-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.060410 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57lsb\" (UniqueName: \"kubernetes.io/projected/4008d5c8-f775-45b9-bffc-fcbbd41768ba-kube-api-access-57lsb\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.322499 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-46df-account-create-update-ckz9b" event={"ID":"4008d5c8-f775-45b9-bffc-fcbbd41768ba","Type":"ContainerDied","Data":"1983f5fbbe655b67f0a519e1cb7889c4b834f8dba58e79154b65088d7c28e053"} Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.324446 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1983f5fbbe655b67f0a519e1cb7889c4b834f8dba58e79154b65088d7c28e053" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.324753 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-46df-account-create-update-ckz9b" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.338957 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-d3be-account-create-update-b5s94" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.342023 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70","Type":"ContainerStarted","Data":"15d5501202ef746503c44af40a1d371da51f31c0d8eadb13bdc485ee08acfb00"} Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.467740 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-6hd2t" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.492212 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2grx\" (UniqueName: \"kubernetes.io/projected/7eaf1233-ea59-4baf-ab46-f24a0b142b80-kube-api-access-r2grx\") pod \"7eaf1233-ea59-4baf-ab46-f24a0b142b80\" (UID: \"7eaf1233-ea59-4baf-ab46-f24a0b142b80\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.492602 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eaf1233-ea59-4baf-ab46-f24a0b142b80-operator-scripts\") pod \"7eaf1233-ea59-4baf-ab46-f24a0b142b80\" (UID: \"7eaf1233-ea59-4baf-ab46-f24a0b142b80\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.494922 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7eaf1233-ea59-4baf-ab46-f24a0b142b80-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7eaf1233-ea59-4baf-ab46-f24a0b142b80" (UID: "7eaf1233-ea59-4baf-ab46-f24a0b142b80"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.522895 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eaf1233-ea59-4baf-ab46-f24a0b142b80-kube-api-access-r2grx" (OuterVolumeSpecName: "kube-api-access-r2grx") pod "7eaf1233-ea59-4baf-ab46-f24a0b142b80" (UID: "7eaf1233-ea59-4baf-ab46-f24a0b142b80"). InnerVolumeSpecName "kube-api-access-r2grx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.531251 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.577111 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kqm6s" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.595280 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rb52\" (UniqueName: \"kubernetes.io/projected/c096e9f7-6065-4656-82c3-167bd595c303-kube-api-access-5rb52\") pod \"c096e9f7-6065-4656-82c3-167bd595c303\" (UID: \"c096e9f7-6065-4656-82c3-167bd595c303\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.595904 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c096e9f7-6065-4656-82c3-167bd595c303-operator-scripts\") pod \"c096e9f7-6065-4656-82c3-167bd595c303\" (UID: \"c096e9f7-6065-4656-82c3-167bd595c303\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.596488 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2grx\" (UniqueName: \"kubernetes.io/projected/7eaf1233-ea59-4baf-ab46-f24a0b142b80-kube-api-access-r2grx\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.596512 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7eaf1233-ea59-4baf-ab46-f24a0b142b80-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.597287 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c096e9f7-6065-4656-82c3-167bd595c303-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c096e9f7-6065-4656-82c3-167bd595c303" (UID: "c096e9f7-6065-4656-82c3-167bd595c303"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.625665 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ea70-account-create-update-9hjhh" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.656677 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c096e9f7-6065-4656-82c3-167bd595c303-kube-api-access-5rb52" (OuterVolumeSpecName: "kube-api-access-5rb52") pod "c096e9f7-6065-4656-82c3-167bd595c303" (UID: "c096e9f7-6065-4656-82c3-167bd595c303"). InnerVolumeSpecName "kube-api-access-5rb52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.708072 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-public-tls-certs\") pod \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.708621 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.708664 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zw4t\" (UniqueName: \"kubernetes.io/projected/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-kube-api-access-5zw4t\") pod \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.708732 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-combined-ca-bundle\") pod \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.708799 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7rpw\" (UniqueName: \"kubernetes.io/projected/0c439a3b-429b-45f7-be39-a4fcbcf904b8-kube-api-access-n7rpw\") pod \"0c439a3b-429b-45f7-be39-a4fcbcf904b8\" (UID: \"0c439a3b-429b-45f7-be39-a4fcbcf904b8\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.708909 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-httpd-run\") pod \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.708974 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-logs\") pod \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.709064 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25000567-9488-4bd2-8b57-a2b4b1f41366-operator-scripts\") pod \"25000567-9488-4bd2-8b57-a2b4b1f41366\" (UID: \"25000567-9488-4bd2-8b57-a2b4b1f41366\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.709090 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c439a3b-429b-45f7-be39-a4fcbcf904b8-operator-scripts\") pod \"0c439a3b-429b-45f7-be39-a4fcbcf904b8\" (UID: \"0c439a3b-429b-45f7-be39-a4fcbcf904b8\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 
10:59:49.709120 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mll6f\" (UniqueName: \"kubernetes.io/projected/25000567-9488-4bd2-8b57-a2b4b1f41366-kube-api-access-mll6f\") pod \"25000567-9488-4bd2-8b57-a2b4b1f41366\" (UID: \"25000567-9488-4bd2-8b57-a2b4b1f41366\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.709143 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-scripts\") pod \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.709172 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-config-data\") pod \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\" (UID: \"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be\") " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.709674 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c096e9f7-6065-4656-82c3-167bd595c303-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.709700 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rb52\" (UniqueName: \"kubernetes.io/projected/c096e9f7-6065-4656-82c3-167bd595c303-kube-api-access-5rb52\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.713584 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" (UID: "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.721354 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c439a3b-429b-45f7-be39-a4fcbcf904b8-kube-api-access-n7rpw" (OuterVolumeSpecName: "kube-api-access-n7rpw") pod "0c439a3b-429b-45f7-be39-a4fcbcf904b8" (UID: "0c439a3b-429b-45f7-be39-a4fcbcf904b8"). InnerVolumeSpecName "kube-api-access-n7rpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.723732 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c439a3b-429b-45f7-be39-a4fcbcf904b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c439a3b-429b-45f7-be39-a4fcbcf904b8" (UID: "0c439a3b-429b-45f7-be39-a4fcbcf904b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.728895 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-logs" (OuterVolumeSpecName: "logs") pod "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" (UID: "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.736905 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" (UID: "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be"). InnerVolumeSpecName "local-storage09-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.739173 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-kube-api-access-5zw4t" (OuterVolumeSpecName: "kube-api-access-5zw4t") pod "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" (UID: "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be"). InnerVolumeSpecName "kube-api-access-5zw4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.739365 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25000567-9488-4bd2-8b57-a2b4b1f41366-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "25000567-9488-4bd2-8b57-a2b4b1f41366" (UID: "25000567-9488-4bd2-8b57-a2b4b1f41366"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.751280 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25000567-9488-4bd2-8b57-a2b4b1f41366-kube-api-access-mll6f" (OuterVolumeSpecName: "kube-api-access-mll6f") pod "25000567-9488-4bd2-8b57-a2b4b1f41366" (UID: "25000567-9488-4bd2-8b57-a2b4b1f41366"). InnerVolumeSpecName "kube-api-access-mll6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.768237 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-scripts" (OuterVolumeSpecName: "scripts") pod "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" (UID: "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.824458 4745 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.824508 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-logs\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.824521 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/25000567-9488-4bd2-8b57-a2b4b1f41366-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.824559 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c439a3b-429b-45f7-be39-a4fcbcf904b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.824571 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mll6f\" (UniqueName: \"kubernetes.io/projected/25000567-9488-4bd2-8b57-a2b4b1f41366-kube-api-access-mll6f\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.824582 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.824627 4745 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.824642 4745 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-5zw4t\" (UniqueName: \"kubernetes.io/projected/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-kube-api-access-5zw4t\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.824655 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7rpw\" (UniqueName: \"kubernetes.io/projected/0c439a3b-429b-45f7-be39-a4fcbcf904b8-kube-api-access-n7rpw\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.904887 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" (UID: "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.949238 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.970237 4745 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 21 10:59:49 crc kubenswrapper[4745]: I0121 10:59:49.977910 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-zdc2f" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.051692 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" (UID: "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.056141 4745 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.056171 4745 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.165662 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-config-data" (OuterVolumeSpecName: "config-data") pod "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" (UID: "3e0bb98b-621c-4941-a2f2-c4e8bb1b60be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.171774 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/867be566-a37c-499e-9d6b-026bbc370fe5-operator-scripts\") pod \"867be566-a37c-499e-9d6b-026bbc370fe5\" (UID: \"867be566-a37c-499e-9d6b-026bbc370fe5\") " Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.172010 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zstjb\" (UniqueName: \"kubernetes.io/projected/867be566-a37c-499e-9d6b-026bbc370fe5-kube-api-access-zstjb\") pod \"867be566-a37c-499e-9d6b-026bbc370fe5\" (UID: \"867be566-a37c-499e-9d6b-026bbc370fe5\") " Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.172761 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/867be566-a37c-499e-9d6b-026bbc370fe5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"867be566-a37c-499e-9d6b-026bbc370fe5" (UID: "867be566-a37c-499e-9d6b-026bbc370fe5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.173264 4745 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/867be566-a37c-499e-9d6b-026bbc370fe5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.173289 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.181074 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/867be566-a37c-499e-9d6b-026bbc370fe5-kube-api-access-zstjb" (OuterVolumeSpecName: "kube-api-access-zstjb") pod "867be566-a37c-499e-9d6b-026bbc370fe5" (UID: "867be566-a37c-499e-9d6b-026bbc370fe5"). InnerVolumeSpecName "kube-api-access-zstjb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.276799 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zstjb\" (UniqueName: \"kubernetes.io/projected/867be566-a37c-499e-9d6b-026bbc370fe5-kube-api-access-zstjb\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.352437 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-ea70-account-create-update-9hjhh" event={"ID":"25000567-9488-4bd2-8b57-a2b4b1f41366","Type":"ContainerDied","Data":"6d2eb67b7c854be18ea9eaeb2b7ac1f28b03e828866fb9796c303ee0198a885a"} Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.352513 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d2eb67b7c854be18ea9eaeb2b7ac1f28b03e828866fb9796c303ee0198a885a" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.352565 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-ea70-account-create-update-9hjhh" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.356427 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-d3be-account-create-update-b5s94" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.356775 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-d3be-account-create-update-b5s94" event={"ID":"7eaf1233-ea59-4baf-ab46-f24a0b142b80","Type":"ContainerDied","Data":"1cc4726cae8313307419cf5d282feff822203d64dcf32b40cc28698babd0c95b"} Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.357353 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cc4726cae8313307419cf5d282feff822203d64dcf32b40cc28698babd0c95b" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.358519 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hd2t" event={"ID":"c096e9f7-6065-4656-82c3-167bd595c303","Type":"ContainerDied","Data":"a8f7a10ef15e63682d2b41a85c14ade26e579851f6e85b5b141dc9bf8614b511"} Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.358571 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8f7a10ef15e63682d2b41a85c14ade26e579851f6e85b5b141dc9bf8614b511" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.358627 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hd2t" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.361028 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-zdc2f" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.361409 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-zdc2f" event={"ID":"867be566-a37c-499e-9d6b-026bbc370fe5","Type":"ContainerDied","Data":"fa77d9891d8dee926248eb3bad86a170215cd8c106b4c8101e9decb21dcff8d6"} Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.361465 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa77d9891d8dee926248eb3bad86a170215cd8c106b4c8101e9decb21dcff8d6" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.365160 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-kqm6s" event={"ID":"0c439a3b-429b-45f7-be39-a4fcbcf904b8","Type":"ContainerDied","Data":"68f7037d47fc4091eea6f84cf29f5ec9e0f8881df3379719e16f77f95b9e11fa"} Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.365187 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68f7037d47fc4091eea6f84cf29f5ec9e0f8881df3379719e16f77f95b9e11fa" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.365283 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-kqm6s" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.369187 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3e0bb98b-621c-4941-a2f2-c4e8bb1b60be","Type":"ContainerDied","Data":"dba86eae0cc21ac7b0bfd751427f3f52abd6e51c6ffe630bab2a7f8baab9e85c"} Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.369250 4745 scope.go:117] "RemoveContainer" containerID="f25e04d1c510833a32aa438356534008fa152ac0df553795c6bbfbdfaa3bf8ce" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.369648 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.400920 4745 generic.go:334] "Generic (PLEG): container finished" podID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerID="379551ea665f8240a2a6912e8cabdcc3ee0f825c366fa7f7368ad2258467570f" exitCode=137 Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.401060 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.401071 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.401162 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cdbfc4d4d-pm6ln" event={"ID":"1b30531d-e957-4efd-b09c-d5d0b5fd1382","Type":"ContainerDied","Data":"379551ea665f8240a2a6912e8cabdcc3ee0f825c366fa7f7368ad2258467570f"} Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.431917 4745 scope.go:117] "RemoveContainer" containerID="8b5d6aff5cd21f1dab9c1e52236e926cbc75280823886bb39f699a251dbe75fe" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.441980 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.453213 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.462412 4745 scope.go:117] "RemoveContainer" containerID="f57ccedb86dad5657f9fdf7c445e2849aacbd47de26c247bb9bde68caa1753ec" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.500959 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 10:59:50 crc kubenswrapper[4745]: E0121 10:59:50.501362 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eaf1233-ea59-4baf-ab46-f24a0b142b80" containerName="mariadb-account-create-update" Jan 21 
10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501378 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eaf1233-ea59-4baf-ab46-f24a0b142b80" containerName="mariadb-account-create-update" Jan 21 10:59:50 crc kubenswrapper[4745]: E0121 10:59:50.501397 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c439a3b-429b-45f7-be39-a4fcbcf904b8" containerName="mariadb-database-create" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501405 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c439a3b-429b-45f7-be39-a4fcbcf904b8" containerName="mariadb-database-create" Jan 21 10:59:50 crc kubenswrapper[4745]: E0121 10:59:50.501418 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c096e9f7-6065-4656-82c3-167bd595c303" containerName="mariadb-database-create" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501425 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c096e9f7-6065-4656-82c3-167bd595c303" containerName="mariadb-database-create" Jan 21 10:59:50 crc kubenswrapper[4745]: E0121 10:59:50.501447 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25000567-9488-4bd2-8b57-a2b4b1f41366" containerName="mariadb-account-create-update" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501453 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="25000567-9488-4bd2-8b57-a2b4b1f41366" containerName="mariadb-account-create-update" Jan 21 10:59:50 crc kubenswrapper[4745]: E0121 10:59:50.501464 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4008d5c8-f775-45b9-bffc-fcbbd41768ba" containerName="mariadb-account-create-update" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501470 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4008d5c8-f775-45b9-bffc-fcbbd41768ba" containerName="mariadb-account-create-update" Jan 21 10:59:50 crc kubenswrapper[4745]: E0121 10:59:50.501488 4745 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerName="glance-log" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501494 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerName="glance-log" Jan 21 10:59:50 crc kubenswrapper[4745]: E0121 10:59:50.501502 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerName="glance-httpd" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501509 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerName="glance-httpd" Jan 21 10:59:50 crc kubenswrapper[4745]: E0121 10:59:50.501521 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="867be566-a37c-499e-9d6b-026bbc370fe5" containerName="mariadb-database-create" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501545 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="867be566-a37c-499e-9d6b-026bbc370fe5" containerName="mariadb-database-create" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501734 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerName="glance-log" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501747 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" containerName="glance-httpd" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501756 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c439a3b-429b-45f7-be39-a4fcbcf904b8" containerName="mariadb-database-create" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501767 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="25000567-9488-4bd2-8b57-a2b4b1f41366" containerName="mariadb-account-create-update" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501791 4745 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7eaf1233-ea59-4baf-ab46-f24a0b142b80" containerName="mariadb-account-create-update" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501803 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c096e9f7-6065-4656-82c3-167bd595c303" containerName="mariadb-database-create" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501811 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="867be566-a37c-499e-9d6b-026bbc370fe5" containerName="mariadb-database-create" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.501821 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="4008d5c8-f775-45b9-bffc-fcbbd41768ba" containerName="mariadb-account-create-update" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.512915 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.517145 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.517570 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.576036 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.692862 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.693014 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.693130 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e539d73b-d00d-45c7-967a-e084d68a78a5-logs\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.693172 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4scj\" (UniqueName: \"kubernetes.io/projected/e539d73b-d00d-45c7-967a-e084d68a78a5-kube-api-access-q4scj\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.693228 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e539d73b-d00d-45c7-967a-e084d68a78a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.693253 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.693279 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.693311 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.798720 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e539d73b-d00d-45c7-967a-e084d68a78a5-logs\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.798797 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4scj\" (UniqueName: \"kubernetes.io/projected/e539d73b-d00d-45c7-967a-e084d68a78a5-kube-api-access-q4scj\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.798934 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e539d73b-d00d-45c7-967a-e084d68a78a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.798959 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.798986 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.799038 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.799067 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.799139 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.799351 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e539d73b-d00d-45c7-967a-e084d68a78a5-logs\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.800055 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.802758 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e539d73b-d00d-45c7-967a-e084d68a78a5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.811557 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-config-data\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.826203 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-scripts\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.835611 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-combined-ca-bundle\") pod 
\"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.844631 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4scj\" (UniqueName: \"kubernetes.io/projected/e539d73b-d00d-45c7-967a-e084d68a78a5-kube-api-access-q4scj\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.866746 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:50 crc kubenswrapper[4745]: I0121 10:59:50.933963 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e539d73b-d00d-45c7-967a-e084d68a78a5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e539d73b-d00d-45c7-967a-e084d68a78a5\") " pod="openstack/glance-default-external-api-0" Jan 21 10:59:51 crc kubenswrapper[4745]: I0121 10:59:51.153338 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 21 10:59:51 crc kubenswrapper[4745]: I0121 10:59:51.447548 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cdbfc4d4d-pm6ln" event={"ID":"1b30531d-e957-4efd-b09c-d5d0b5fd1382","Type":"ContainerStarted","Data":"8f5f7bc01ddb73a6c9d98f33675a22c566c1c30950639e4f4a2083eabc92ed40"} Jan 21 10:59:51 crc kubenswrapper[4745]: I0121 10:59:51.945314 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 21 10:59:52 crc kubenswrapper[4745]: I0121 10:59:52.013321 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0bb98b-621c-4941-a2f2-c4e8bb1b60be" path="/var/lib/kubelet/pods/3e0bb98b-621c-4941-a2f2-c4e8bb1b60be/volumes" Jan 21 10:59:52 crc kubenswrapper[4745]: I0121 10:59:52.521241 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e539d73b-d00d-45c7-967a-e084d68a78a5","Type":"ContainerStarted","Data":"4f62d5fddb25902a58a1a184cbca55569c405f04766e65d54a732c4cb593cff7"} Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.288165 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.371805 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data\") pod \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.371941 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-combined-ca-bundle\") pod \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.371987 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data-custom\") pod \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.372027 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74pz6\" (UniqueName: \"kubernetes.io/projected/bf1d009d-bd84-435d-aeb4-8bf435eeea50-kube-api-access-74pz6\") pod \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\" (UID: \"bf1d009d-bd84-435d-aeb4-8bf435eeea50\") " Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.385715 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bf1d009d-bd84-435d-aeb4-8bf435eeea50" (UID: "bf1d009d-bd84-435d-aeb4-8bf435eeea50"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.386942 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1d009d-bd84-435d-aeb4-8bf435eeea50-kube-api-access-74pz6" (OuterVolumeSpecName: "kube-api-access-74pz6") pod "bf1d009d-bd84-435d-aeb4-8bf435eeea50" (UID: "bf1d009d-bd84-435d-aeb4-8bf435eeea50"). InnerVolumeSpecName "kube-api-access-74pz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.423159 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf1d009d-bd84-435d-aeb4-8bf435eeea50" (UID: "bf1d009d-bd84-435d-aeb4-8bf435eeea50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.461678 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data" (OuterVolumeSpecName: "config-data") pod "bf1d009d-bd84-435d-aeb4-8bf435eeea50" (UID: "bf1d009d-bd84-435d-aeb4-8bf435eeea50"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.473860 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.473894 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.473910 4745 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bf1d009d-bd84-435d-aeb4-8bf435eeea50-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.473920 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74pz6\" (UniqueName: \"kubernetes.io/projected/bf1d009d-bd84-435d-aeb4-8bf435eeea50-kube-api-access-74pz6\") on node \"crc\" DevicePath \"\"" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.550311 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e539d73b-d00d-45c7-967a-e084d68a78a5","Type":"ContainerStarted","Data":"5463ab1cbbec80095080305c4043e3baed43425d4fe26251ea06bc7a6bf068bd"} Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.559763 4745 generic.go:334] "Generic (PLEG): container finished" podID="bf1d009d-bd84-435d-aeb4-8bf435eeea50" containerID="e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee" exitCode=0 Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.559823 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-58b4779467-f9wqf" 
event={"ID":"bf1d009d-bd84-435d-aeb4-8bf435eeea50","Type":"ContainerDied","Data":"e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee"} Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.559853 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-58b4779467-f9wqf" event={"ID":"bf1d009d-bd84-435d-aeb4-8bf435eeea50","Type":"ContainerDied","Data":"97c54e8780fd5d0b5ce873cfda79cd1eccbf67f168d974e81ff108c5138578e2"} Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.559875 4745 scope.go:117] "RemoveContainer" containerID="e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.560096 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-58b4779467-f9wqf" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.598725 4745 scope.go:117] "RemoveContainer" containerID="e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee" Jan 21 10:59:53 crc kubenswrapper[4745]: E0121 10:59:53.604803 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee\": container with ID starting with e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee not found: ID does not exist" containerID="e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.604865 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee"} err="failed to get container status \"e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee\": rpc error: code = NotFound desc = could not find container \"e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee\": container with ID starting with 
e12a66084853cccb3a8216da74b941bbc805c0108427571ab260289d33096aee not found: ID does not exist" Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.624579 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-58b4779467-f9wqf"] Jan 21 10:59:53 crc kubenswrapper[4745]: I0121 10:59:53.636576 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-58b4779467-f9wqf"] Jan 21 10:59:54 crc kubenswrapper[4745]: I0121 10:59:54.018899 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1d009d-bd84-435d-aeb4-8bf435eeea50" path="/var/lib/kubelet/pods/bf1d009d-bd84-435d-aeb4-8bf435eeea50/volumes" Jan 21 10:59:54 crc kubenswrapper[4745]: I0121 10:59:54.040474 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:54 crc kubenswrapper[4745]: I0121 10:59:54.040602 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 10:59:54 crc kubenswrapper[4745]: I0121 10:59:54.042470 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 21 10:59:54 crc kubenswrapper[4745]: I0121 10:59:54.375891 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 10:59:54 crc kubenswrapper[4745]: I0121 10:59:54.573162 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e539d73b-d00d-45c7-967a-e084d68a78a5","Type":"ContainerStarted","Data":"19e382e80fff608d70c894f84d3282236fc81628e6afa82fa134459da80558c3"} Jan 21 10:59:54 crc kubenswrapper[4745]: I0121 10:59:54.594813 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.594780722 podStartE2EDuration="4.594780722s" podCreationTimestamp="2026-01-21 10:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 10:59:54.592899391 +0000 UTC m=+1379.053686989" watchObservedRunningTime="2026-01-21 10:59:54.594780722 +0000 UTC m=+1379.055568320" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.009459 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nhjnr"] Jan 21 10:59:55 crc kubenswrapper[4745]: E0121 10:59:55.010173 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1d009d-bd84-435d-aeb4-8bf435eeea50" containerName="heat-engine" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.010189 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1d009d-bd84-435d-aeb4-8bf435eeea50" containerName="heat-engine" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.010381 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf1d009d-bd84-435d-aeb4-8bf435eeea50" containerName="heat-engine" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.011055 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.014365 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.014490 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.016958 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8bnth" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.024741 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nhjnr"] Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.130753 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pxt2\" (UniqueName: \"kubernetes.io/projected/6d39082c-f9aa-4e16-a704-487ab278344c-kube-api-access-8pxt2\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.130855 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-scripts\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.131019 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " 
pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.131112 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-config-data\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.233389 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.233453 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-config-data\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.233565 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pxt2\" (UniqueName: \"kubernetes.io/projected/6d39082c-f9aa-4e16-a704-487ab278344c-kube-api-access-8pxt2\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.233614 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-scripts\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: 
\"6d39082c-f9aa-4e16-a704-487ab278344c\") " pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.241442 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-config-data\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.243341 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.243351 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-scripts\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.264230 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pxt2\" (UniqueName: \"kubernetes.io/projected/6d39082c-f9aa-4e16-a704-487ab278344c-kube-api-access-8pxt2\") pod \"nova-cell0-conductor-db-sync-nhjnr\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.334894 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 10:59:55 crc kubenswrapper[4745]: W0121 10:59:55.879987 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6d39082c_f9aa_4e16_a704_487ab278344c.slice/crio-7a17bd29d7824f0961c6627418133212cc2e7e9d5780fe0f63623aa0dd4dadfb WatchSource:0}: Error finding container 7a17bd29d7824f0961c6627418133212cc2e7e9d5780fe0f63623aa0dd4dadfb: Status 404 returned error can't find the container with id 7a17bd29d7824f0961c6627418133212cc2e7e9d5780fe0f63623aa0dd4dadfb Jan 21 10:59:55 crc kubenswrapper[4745]: I0121 10:59:55.884828 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nhjnr"] Jan 21 10:59:56 crc kubenswrapper[4745]: I0121 10:59:56.601674 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nhjnr" event={"ID":"6d39082c-f9aa-4e16-a704-487ab278344c","Type":"ContainerStarted","Data":"7a17bd29d7824f0961c6627418133212cc2e7e9d5780fe0f63623aa0dd4dadfb"} Jan 21 10:59:59 crc kubenswrapper[4745]: E0121 10:59:59.223272 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 21 10:59:59 crc kubenswrapper[4745]: E0121 10:59:59.223825 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ps64w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70): ErrImagePull: initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)" logger="UnhandledError" Jan 21 10:59:59 crc kubenswrapper[4745]: E0121 10:59:59.225071 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"initializing source docker://registry.redhat.io/ubi9/httpd-24:latest: Requesting bearer token: invalid status code from registry 504 (Gateway Timeout)\"" pod="openstack/ceilometer-0" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" Jan 21 10:59:59 crc kubenswrapper[4745]: I0121 10:59:59.637471 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="sg-core" containerID="cri-o://15d5501202ef746503c44af40a1d371da51f31c0d8eadb13bdc485ee08acfb00" gracePeriod=30 Jan 21 10:59:59 crc kubenswrapper[4745]: I0121 10:59:59.637496 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="ceilometer-notification-agent" containerID="cri-o://e7de4d1251266c611e2b74fab0b6d66e8dd3496e8af2cab45005be02b309b10e" gracePeriod=30 Jan 21 10:59:59 crc kubenswrapper[4745]: I0121 10:59:59.637947 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="ceilometer-central-agent" containerID="cri-o://252ce9c30a02708cbdae0f9bce6025c88e292fb590512fb06748537bd320e112" gracePeriod=30 Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.029572 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.029625 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.031575 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.147403 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q"] Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.153588 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.155732 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.155915 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.163760 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q"] Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.247255 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10308ebf-7e98-40cf-ae85-cdda215f5849-secret-volume\") pod \"collect-profiles-29483220-wzj8q\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.247328 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xlhf\" (UniqueName: \"kubernetes.io/projected/10308ebf-7e98-40cf-ae85-cdda215f5849-kube-api-access-9xlhf\") pod \"collect-profiles-29483220-wzj8q\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.247412 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10308ebf-7e98-40cf-ae85-cdda215f5849-config-volume\") pod \"collect-profiles-29483220-wzj8q\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.349916 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xlhf\" (UniqueName: \"kubernetes.io/projected/10308ebf-7e98-40cf-ae85-cdda215f5849-kube-api-access-9xlhf\") pod \"collect-profiles-29483220-wzj8q\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.350081 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10308ebf-7e98-40cf-ae85-cdda215f5849-config-volume\") pod \"collect-profiles-29483220-wzj8q\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.350277 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10308ebf-7e98-40cf-ae85-cdda215f5849-secret-volume\") pod \"collect-profiles-29483220-wzj8q\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.351221 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10308ebf-7e98-40cf-ae85-cdda215f5849-config-volume\") pod \"collect-profiles-29483220-wzj8q\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.356005 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/10308ebf-7e98-40cf-ae85-cdda215f5849-secret-volume\") pod \"collect-profiles-29483220-wzj8q\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.367821 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xlhf\" (UniqueName: \"kubernetes.io/projected/10308ebf-7e98-40cf-ae85-cdda215f5849-kube-api-access-9xlhf\") pod \"collect-profiles-29483220-wzj8q\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.492511 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.669897 4745 generic.go:334] "Generic (PLEG): container finished" podID="8d2746d8-86a1-412c-8cac-b737fff90886" containerID="3643118f481e7226b702137d2af839c8cf6efc660091c1400f2eeeabfda81e6f" exitCode=137 Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.669973 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb545d88-xv4bf" event={"ID":"8d2746d8-86a1-412c-8cac-b737fff90886","Type":"ContainerDied","Data":"3643118f481e7226b702137d2af839c8cf6efc660091c1400f2eeeabfda81e6f"} Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.670008 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb545d88-xv4bf" event={"ID":"8d2746d8-86a1-412c-8cac-b737fff90886","Type":"ContainerStarted","Data":"167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3"} Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.670030 4745 scope.go:117] "RemoveContainer" containerID="db044202ae0063faeb02cf75ac50f68010a4372bb2bd84a035565822361bf906" Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 
11:00:00.678320 4745 generic.go:334] "Generic (PLEG): container finished" podID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerID="15d5501202ef746503c44af40a1d371da51f31c0d8eadb13bdc485ee08acfb00" exitCode=2 Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.678576 4745 generic.go:334] "Generic (PLEG): container finished" podID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerID="e7de4d1251266c611e2b74fab0b6d66e8dd3496e8af2cab45005be02b309b10e" exitCode=0 Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.678627 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70","Type":"ContainerDied","Data":"15d5501202ef746503c44af40a1d371da51f31c0d8eadb13bdc485ee08acfb00"} Jan 21 11:00:00 crc kubenswrapper[4745]: I0121 11:00:00.678778 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70","Type":"ContainerDied","Data":"e7de4d1251266c611e2b74fab0b6d66e8dd3496e8af2cab45005be02b309b10e"} Jan 21 11:00:01 crc kubenswrapper[4745]: I0121 11:00:01.155295 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 11:00:01 crc kubenswrapper[4745]: I0121 11:00:01.155344 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 21 11:00:01 crc kubenswrapper[4745]: I0121 11:00:01.199747 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 11:00:01 crc kubenswrapper[4745]: I0121 11:00:01.260071 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 21 11:00:01 crc kubenswrapper[4745]: I0121 11:00:01.688772 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 11:00:01 crc 
kubenswrapper[4745]: I0121 11:00:01.688817 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 21 11:00:03 crc kubenswrapper[4745]: I0121 11:00:03.847685 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 11:00:03 crc kubenswrapper[4745]: I0121 11:00:03.848124 4745 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 21 11:00:03 crc kubenswrapper[4745]: I0121 11:00:03.858414 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 21 11:00:09 crc kubenswrapper[4745]: I0121 11:00:09.710511 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 11:00:09 crc kubenswrapper[4745]: I0121 11:00:09.711285 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 11:00:09 crc kubenswrapper[4745]: I0121 11:00:09.860867 4745 generic.go:334] "Generic (PLEG): container finished" podID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerID="252ce9c30a02708cbdae0f9bce6025c88e292fb590512fb06748537bd320e112" exitCode=0 Jan 21 11:00:09 crc kubenswrapper[4745]: I0121 11:00:09.860910 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70","Type":"ContainerDied","Data":"252ce9c30a02708cbdae0f9bce6025c88e292fb590512fb06748537bd320e112"} Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.001078 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.030158 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 21 11:00:10 crc kubenswrapper[4745]: W0121 11:00:10.152458 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10308ebf_7e98_40cf_ae85_cdda215f5849.slice/crio-50b01bcf03052bd6964fafff671623959e85400644663f2aa0cfd8806f1bf9e6 WatchSource:0}: Error finding container 50b01bcf03052bd6964fafff671623959e85400644663f2aa0cfd8806f1bf9e6: Status 404 returned error can't find the container with id 50b01bcf03052bd6964fafff671623959e85400644663f2aa0cfd8806f1bf9e6 Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.153895 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q"] Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.174744 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-scripts\") pod \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.174864 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-combined-ca-bundle\") pod \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.175016 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-run-httpd\") pod \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.175054 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps64w\" (UniqueName: \"kubernetes.io/projected/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-kube-api-access-ps64w\") pod \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.175093 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-log-httpd\") pod \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.175169 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-config-data\") pod \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.175206 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-sg-core-conf-yaml\") pod \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\" (UID: \"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70\") " Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.178437 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" (UID: "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70"). 
InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.178830 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" (UID: "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.184691 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-scripts" (OuterVolumeSpecName: "scripts") pod "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" (UID: "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.186966 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-kube-api-access-ps64w" (OuterVolumeSpecName: "kube-api-access-ps64w") pod "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" (UID: "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70"). InnerVolumeSpecName "kube-api-access-ps64w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.213728 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" (UID: "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.234785 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-config-data" (OuterVolumeSpecName: "config-data") pod "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" (UID: "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.241776 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" (UID: "1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.277609 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.277648 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.277664 4745 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.277678 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps64w\" (UniqueName: \"kubernetes.io/projected/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-kube-api-access-ps64w\") on node \"crc\" DevicePath \"\"" Jan 21 
11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.277694 4745 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.277707 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.277718 4745 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.876091 4745 generic.go:334] "Generic (PLEG): container finished" podID="10308ebf-7e98-40cf-ae85-cdda215f5849" containerID="d2e300901e122d4bda836981957b1d222167b06cb3652e09c35113d6087f4a65" exitCode=0 Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.876201 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" event={"ID":"10308ebf-7e98-40cf-ae85-cdda215f5849","Type":"ContainerDied","Data":"d2e300901e122d4bda836981957b1d222167b06cb3652e09c35113d6087f4a65"} Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.876696 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" event={"ID":"10308ebf-7e98-40cf-ae85-cdda215f5849","Type":"ContainerStarted","Data":"50b01bcf03052bd6964fafff671623959e85400644663f2aa0cfd8806f1bf9e6"} Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.883658 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.883662 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70","Type":"ContainerDied","Data":"b03ded853711c85ba1e881681420816ba5e5a88a2a7ec176b4b3e8c536d4925e"} Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.883722 4745 scope.go:117] "RemoveContainer" containerID="15d5501202ef746503c44af40a1d371da51f31c0d8eadb13bdc485ee08acfb00" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.888752 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nhjnr" event={"ID":"6d39082c-f9aa-4e16-a704-487ab278344c","Type":"ContainerStarted","Data":"39d186d40a9c15581d1b984e888206a008f06219ad9df9c09e5e0dee19a2a4f1"} Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.922147 4745 scope.go:117] "RemoveContainer" containerID="e7de4d1251266c611e2b74fab0b6d66e8dd3496e8af2cab45005be02b309b10e" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.947157 4745 scope.go:117] "RemoveContainer" containerID="252ce9c30a02708cbdae0f9bce6025c88e292fb590512fb06748537bd320e112" Jan 21 11:00:10 crc kubenswrapper[4745]: I0121 11:00:10.990462 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-nhjnr" podStartSLOduration=3.340357663 podStartE2EDuration="16.990436373s" podCreationTimestamp="2026-01-21 10:59:54 +0000 UTC" firstStartedPulling="2026-01-21 10:59:55.88675246 +0000 UTC m=+1380.347540058" lastFinishedPulling="2026-01-21 11:00:09.53683117 +0000 UTC m=+1393.997618768" observedRunningTime="2026-01-21 11:00:10.940346908 +0000 UTC m=+1395.401134506" watchObservedRunningTime="2026-01-21 11:00:10.990436373 +0000 UTC m=+1395.451223971" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.021002 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 
11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.038852 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.069055 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:00:11 crc kubenswrapper[4745]: E0121 11:00:11.069483 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="ceilometer-central-agent" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.069501 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="ceilometer-central-agent" Jan 21 11:00:11 crc kubenswrapper[4745]: E0121 11:00:11.082621 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="ceilometer-notification-agent" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.082655 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="ceilometer-notification-agent" Jan 21 11:00:11 crc kubenswrapper[4745]: E0121 11:00:11.082673 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="sg-core" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.082682 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="sg-core" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.083017 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="ceilometer-notification-agent" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.083044 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="sg-core" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.083053 4745 
memory_manager.go:354] "RemoveStaleState removing state" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" containerName="ceilometer-central-agent" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.084707 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.088681 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.093838 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.199646 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-run-httpd\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.199742 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4285\" (UniqueName: \"kubernetes.io/projected/c1f98a54-6f3d-4171-8389-507c99701317-kube-api-access-f4285\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.199818 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-log-httpd\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.199843 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.199860 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-scripts\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.199906 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.199924 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-config-data\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.216870 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.302009 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.302067 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-config-data\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.302101 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-run-httpd\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.302178 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4285\" (UniqueName: \"kubernetes.io/projected/c1f98a54-6f3d-4171-8389-507c99701317-kube-api-access-f4285\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.302265 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-log-httpd\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.302296 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.302321 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-scripts\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 
11:00:11.303361 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-run-httpd\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.303456 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-log-httpd\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.314795 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-scripts\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.314888 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.316836 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-config-data\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.329060 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4285\" (UniqueName: \"kubernetes.io/projected/c1f98a54-6f3d-4171-8389-507c99701317-kube-api-access-f4285\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " 
pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.337473 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.421724 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:00:11 crc kubenswrapper[4745]: I0121 11:00:11.953553 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.024829 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70" path="/var/lib/kubelet/pods/1f2e0a6b-c7e8-44c5-b4f9-e3d843dead70/volumes" Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.242142 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.324268 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10308ebf-7e98-40cf-ae85-cdda215f5849-secret-volume\") pod \"10308ebf-7e98-40cf-ae85-cdda215f5849\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.324370 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xlhf\" (UniqueName: \"kubernetes.io/projected/10308ebf-7e98-40cf-ae85-cdda215f5849-kube-api-access-9xlhf\") pod \"10308ebf-7e98-40cf-ae85-cdda215f5849\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.324552 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10308ebf-7e98-40cf-ae85-cdda215f5849-config-volume\") pod \"10308ebf-7e98-40cf-ae85-cdda215f5849\" (UID: \"10308ebf-7e98-40cf-ae85-cdda215f5849\") " Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.326378 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10308ebf-7e98-40cf-ae85-cdda215f5849-config-volume" (OuterVolumeSpecName: "config-volume") pod "10308ebf-7e98-40cf-ae85-cdda215f5849" (UID: "10308ebf-7e98-40cf-ae85-cdda215f5849"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.334655 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10308ebf-7e98-40cf-ae85-cdda215f5849-kube-api-access-9xlhf" (OuterVolumeSpecName: "kube-api-access-9xlhf") pod "10308ebf-7e98-40cf-ae85-cdda215f5849" (UID: "10308ebf-7e98-40cf-ae85-cdda215f5849"). 
InnerVolumeSpecName "kube-api-access-9xlhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.337417 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10308ebf-7e98-40cf-ae85-cdda215f5849-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "10308ebf-7e98-40cf-ae85-cdda215f5849" (UID: "10308ebf-7e98-40cf-ae85-cdda215f5849"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.426244 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xlhf\" (UniqueName: \"kubernetes.io/projected/10308ebf-7e98-40cf-ae85-cdda215f5849-kube-api-access-9xlhf\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.426700 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10308ebf-7e98-40cf-ae85-cdda215f5849-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.426725 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/10308ebf-7e98-40cf-ae85-cdda215f5849-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.911460 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" event={"ID":"10308ebf-7e98-40cf-ae85-cdda215f5849","Type":"ContainerDied","Data":"50b01bcf03052bd6964fafff671623959e85400644663f2aa0cfd8806f1bf9e6"} Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.911507 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50b01bcf03052bd6964fafff671623959e85400644663f2aa0cfd8806f1bf9e6" Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.911504 4745 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q" Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.913501 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1f98a54-6f3d-4171-8389-507c99701317","Type":"ContainerStarted","Data":"0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6"} Jan 21 11:00:12 crc kubenswrapper[4745]: I0121 11:00:12.913547 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1f98a54-6f3d-4171-8389-507c99701317","Type":"ContainerStarted","Data":"6741a6a3bab381c290f41f69c252dddeccc0f9e588844cca35415db3c5e43b07"} Jan 21 11:00:13 crc kubenswrapper[4745]: I0121 11:00:13.923587 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1f98a54-6f3d-4171-8389-507c99701317","Type":"ContainerStarted","Data":"1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228"} Jan 21 11:00:14 crc kubenswrapper[4745]: I0121 11:00:14.731013 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:00:14 crc kubenswrapper[4745]: I0121 11:00:14.965622 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1f98a54-6f3d-4171-8389-507c99701317","Type":"ContainerStarted","Data":"7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1"} Jan 21 11:00:17 crc kubenswrapper[4745]: I0121 11:00:17.992258 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1f98a54-6f3d-4171-8389-507c99701317","Type":"ContainerStarted","Data":"2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9"} Jan 21 11:00:17 crc kubenswrapper[4745]: I0121 11:00:17.992423 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1f98a54-6f3d-4171-8389-507c99701317" 
containerName="ceilometer-central-agent" containerID="cri-o://0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6" gracePeriod=30 Jan 21 11:00:17 crc kubenswrapper[4745]: I0121 11:00:17.992502 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="proxy-httpd" containerID="cri-o://2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9" gracePeriod=30 Jan 21 11:00:17 crc kubenswrapper[4745]: I0121 11:00:17.992613 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="sg-core" containerID="cri-o://7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1" gracePeriod=30 Jan 21 11:00:17 crc kubenswrapper[4745]: I0121 11:00:17.992575 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="ceilometer-notification-agent" containerID="cri-o://1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228" gracePeriod=30 Jan 21 11:00:17 crc kubenswrapper[4745]: I0121 11:00:17.993049 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:00:18 crc kubenswrapper[4745]: I0121 11:00:18.030104 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.267220546 podStartE2EDuration="7.030078313s" podCreationTimestamp="2026-01-21 11:00:11 +0000 UTC" firstStartedPulling="2026-01-21 11:00:11.960278419 +0000 UTC m=+1396.421066017" lastFinishedPulling="2026-01-21 11:00:17.723136186 +0000 UTC m=+1402.183923784" observedRunningTime="2026-01-21 11:00:18.023829833 +0000 UTC m=+1402.484617431" watchObservedRunningTime="2026-01-21 11:00:18.030078313 +0000 UTC m=+1402.490865911" Jan 21 11:00:19 crc kubenswrapper[4745]: 
I0121 11:00:19.004731 4745 generic.go:334] "Generic (PLEG): container finished" podID="c1f98a54-6f3d-4171-8389-507c99701317" containerID="7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1" exitCode=2 Jan 21 11:00:19 crc kubenswrapper[4745]: I0121 11:00:19.004764 4745 generic.go:334] "Generic (PLEG): container finished" podID="c1f98a54-6f3d-4171-8389-507c99701317" containerID="1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228" exitCode=0 Jan 21 11:00:19 crc kubenswrapper[4745]: I0121 11:00:19.004784 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1f98a54-6f3d-4171-8389-507c99701317","Type":"ContainerDied","Data":"7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1"} Jan 21 11:00:19 crc kubenswrapper[4745]: I0121 11:00:19.004804 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1f98a54-6f3d-4171-8389-507c99701317","Type":"ContainerDied","Data":"1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228"} Jan 21 11:00:19 crc kubenswrapper[4745]: I0121 11:00:19.713643 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 21 11:00:20 crc kubenswrapper[4745]: I0121 11:00:20.031070 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 21 11:00:20 crc kubenswrapper[4745]: I0121 11:00:20.031140 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 11:00:20 crc kubenswrapper[4745]: I0121 11:00:20.031799 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"8f5f7bc01ddb73a6c9d98f33675a22c566c1c30950639e4f4a2083eabc92ed40"} pod="openstack/horizon-5cdbfc4d4d-pm6ln" containerMessage="Container horizon failed startup probe, will be restarted" Jan 21 11:00:20 crc kubenswrapper[4745]: I0121 11:00:20.031829 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" containerID="cri-o://8f5f7bc01ddb73a6c9d98f33675a22c566c1c30950639e4f4a2083eabc92ed40" gracePeriod=30 Jan 21 11:00:24 crc kubenswrapper[4745]: I0121 11:00:24.136195 4745 generic.go:334] "Generic (PLEG): container finished" podID="c1f98a54-6f3d-4171-8389-507c99701317" containerID="0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6" exitCode=0 Jan 21 11:00:24 crc kubenswrapper[4745]: I0121 11:00:24.136326 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1f98a54-6f3d-4171-8389-507c99701317","Type":"ContainerDied","Data":"0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6"} Jan 21 11:00:27 crc kubenswrapper[4745]: I0121 11:00:27.166550 4745 generic.go:334] "Generic (PLEG): container finished" podID="6d39082c-f9aa-4e16-a704-487ab278344c" containerID="39d186d40a9c15581d1b984e888206a008f06219ad9df9c09e5e0dee19a2a4f1" exitCode=0 Jan 21 11:00:27 crc kubenswrapper[4745]: I0121 11:00:27.166610 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nhjnr" event={"ID":"6d39082c-f9aa-4e16-a704-487ab278344c","Type":"ContainerDied","Data":"39d186d40a9c15581d1b984e888206a008f06219ad9df9c09e5e0dee19a2a4f1"} Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.639213 4745 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.737288 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-config-data\") pod \"6d39082c-f9aa-4e16-a704-487ab278344c\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.737506 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-combined-ca-bundle\") pod \"6d39082c-f9aa-4e16-a704-487ab278344c\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.737568 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pxt2\" (UniqueName: \"kubernetes.io/projected/6d39082c-f9aa-4e16-a704-487ab278344c-kube-api-access-8pxt2\") pod \"6d39082c-f9aa-4e16-a704-487ab278344c\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.737614 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-scripts\") pod \"6d39082c-f9aa-4e16-a704-487ab278344c\" (UID: \"6d39082c-f9aa-4e16-a704-487ab278344c\") " Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.747293 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d39082c-f9aa-4e16-a704-487ab278344c-kube-api-access-8pxt2" (OuterVolumeSpecName: "kube-api-access-8pxt2") pod "6d39082c-f9aa-4e16-a704-487ab278344c" (UID: "6d39082c-f9aa-4e16-a704-487ab278344c"). InnerVolumeSpecName "kube-api-access-8pxt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.750675 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-scripts" (OuterVolumeSpecName: "scripts") pod "6d39082c-f9aa-4e16-a704-487ab278344c" (UID: "6d39082c-f9aa-4e16-a704-487ab278344c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.784292 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6d39082c-f9aa-4e16-a704-487ab278344c" (UID: "6d39082c-f9aa-4e16-a704-487ab278344c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.815836 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-config-data" (OuterVolumeSpecName: "config-data") pod "6d39082c-f9aa-4e16-a704-487ab278344c" (UID: "6d39082c-f9aa-4e16-a704-487ab278344c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.839476 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.839510 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.839519 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d39082c-f9aa-4e16-a704-487ab278344c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:28 crc kubenswrapper[4745]: I0121 11:00:28.839561 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pxt2\" (UniqueName: \"kubernetes.io/projected/6d39082c-f9aa-4e16-a704-487ab278344c-kube-api-access-8pxt2\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.185692 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-nhjnr" event={"ID":"6d39082c-f9aa-4e16-a704-487ab278344c","Type":"ContainerDied","Data":"7a17bd29d7824f0961c6627418133212cc2e7e9d5780fe0f63623aa0dd4dadfb"} Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.185739 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a17bd29d7824f0961c6627418133212cc2e7e9d5780fe0f63623aa0dd4dadfb" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.185831 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-nhjnr" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.325354 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 11:00:29 crc kubenswrapper[4745]: E0121 11:00:29.325831 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d39082c-f9aa-4e16-a704-487ab278344c" containerName="nova-cell0-conductor-db-sync" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.325858 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d39082c-f9aa-4e16-a704-487ab278344c" containerName="nova-cell0-conductor-db-sync" Jan 21 11:00:29 crc kubenswrapper[4745]: E0121 11:00:29.325882 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10308ebf-7e98-40cf-ae85-cdda215f5849" containerName="collect-profiles" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.325894 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="10308ebf-7e98-40cf-ae85-cdda215f5849" containerName="collect-profiles" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.326123 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d39082c-f9aa-4e16-a704-487ab278344c" containerName="nova-cell0-conductor-db-sync" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.326161 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="10308ebf-7e98-40cf-ae85-cdda215f5849" containerName="collect-profiles" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.326942 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.335443 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.339863 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-8bnth" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.340043 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.356245 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83e31f6-d723-448d-9b2d-dbb7c7d23447-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f83e31f6-d723-448d-9b2d-dbb7c7d23447\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.356329 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83e31f6-d723-448d-9b2d-dbb7c7d23447-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f83e31f6-d723-448d-9b2d-dbb7c7d23447\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.356420 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6dnr\" (UniqueName: \"kubernetes.io/projected/f83e31f6-d723-448d-9b2d-dbb7c7d23447-kube-api-access-k6dnr\") pod \"nova-cell0-conductor-0\" (UID: \"f83e31f6-d723-448d-9b2d-dbb7c7d23447\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.458426 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6dnr\" (UniqueName: 
\"kubernetes.io/projected/f83e31f6-d723-448d-9b2d-dbb7c7d23447-kube-api-access-k6dnr\") pod \"nova-cell0-conductor-0\" (UID: \"f83e31f6-d723-448d-9b2d-dbb7c7d23447\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.458585 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83e31f6-d723-448d-9b2d-dbb7c7d23447-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f83e31f6-d723-448d-9b2d-dbb7c7d23447\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.458670 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83e31f6-d723-448d-9b2d-dbb7c7d23447-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f83e31f6-d723-448d-9b2d-dbb7c7d23447\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.464372 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f83e31f6-d723-448d-9b2d-dbb7c7d23447-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f83e31f6-d723-448d-9b2d-dbb7c7d23447\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.466289 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f83e31f6-d723-448d-9b2d-dbb7c7d23447-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f83e31f6-d723-448d-9b2d-dbb7c7d23447\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.480443 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6dnr\" (UniqueName: \"kubernetes.io/projected/f83e31f6-d723-448d-9b2d-dbb7c7d23447-kube-api-access-k6dnr\") pod \"nova-cell0-conductor-0\" (UID: 
\"f83e31f6-d723-448d-9b2d-dbb7c7d23447\") " pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:29 crc kubenswrapper[4745]: I0121 11:00:29.665881 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:30 crc kubenswrapper[4745]: I0121 11:00:30.174099 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 21 11:00:30 crc kubenswrapper[4745]: I0121 11:00:30.201752 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f83e31f6-d723-448d-9b2d-dbb7c7d23447","Type":"ContainerStarted","Data":"26534184696a21de28c637168a1850c61f3fead91285bf24c6ed7d2978d474d6"} Jan 21 11:00:31 crc kubenswrapper[4745]: I0121 11:00:31.213870 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f83e31f6-d723-448d-9b2d-dbb7c7d23447","Type":"ContainerStarted","Data":"f61fe7c81de4e6f272d4a5a1682f64d19c6023020661d3f9c21303e6a6b072eb"} Jan 21 11:00:31 crc kubenswrapper[4745]: I0121 11:00:31.215229 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 21 11:00:31 crc kubenswrapper[4745]: I0121 11:00:31.235816 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.235798452 podStartE2EDuration="2.235798452s" podCreationTimestamp="2026-01-21 11:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:00:31.231103664 +0000 UTC m=+1415.691891262" watchObservedRunningTime="2026-01-21 11:00:31.235798452 +0000 UTC m=+1415.696586050" Jan 21 11:00:31 crc kubenswrapper[4745]: I0121 11:00:31.808863 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w27tt"] Jan 21 11:00:31 crc kubenswrapper[4745]: I0121 
11:00:31.811097 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:31 crc kubenswrapper[4745]: I0121 11:00:31.825335 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w27tt"] Jan 21 11:00:31 crc kubenswrapper[4745]: I0121 11:00:31.903461 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-catalog-content\") pod \"redhat-operators-w27tt\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:31 crc kubenswrapper[4745]: I0121 11:00:31.903855 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zs6d\" (UniqueName: \"kubernetes.io/projected/7c943e44-5a8c-4f32-a615-4126fcb73e6a-kube-api-access-4zs6d\") pod \"redhat-operators-w27tt\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:31 crc kubenswrapper[4745]: I0121 11:00:31.903966 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-utilities\") pod \"redhat-operators-w27tt\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:32 crc kubenswrapper[4745]: I0121 11:00:32.005227 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-catalog-content\") pod \"redhat-operators-w27tt\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:32 crc kubenswrapper[4745]: I0121 11:00:32.005309 
4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zs6d\" (UniqueName: \"kubernetes.io/projected/7c943e44-5a8c-4f32-a615-4126fcb73e6a-kube-api-access-4zs6d\") pod \"redhat-operators-w27tt\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:32 crc kubenswrapper[4745]: I0121 11:00:32.005340 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-utilities\") pod \"redhat-operators-w27tt\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:32 crc kubenswrapper[4745]: I0121 11:00:32.005940 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-utilities\") pod \"redhat-operators-w27tt\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:32 crc kubenswrapper[4745]: I0121 11:00:32.005966 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-catalog-content\") pod \"redhat-operators-w27tt\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:32 crc kubenswrapper[4745]: I0121 11:00:32.047578 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zs6d\" (UniqueName: \"kubernetes.io/projected/7c943e44-5a8c-4f32-a615-4126fcb73e6a-kube-api-access-4zs6d\") pod \"redhat-operators-w27tt\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:32 crc kubenswrapper[4745]: I0121 11:00:32.190620 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:32 crc kubenswrapper[4745]: I0121 11:00:32.930652 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w27tt"] Jan 21 11:00:33 crc kubenswrapper[4745]: I0121 11:00:33.248277 4745 generic.go:334] "Generic (PLEG): container finished" podID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerID="1814582210b91ff18fd9b73897768c2dfd15ee210ab7f455def51b67f7a96bd9" exitCode=0 Jan 21 11:00:33 crc kubenswrapper[4745]: I0121 11:00:33.249563 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w27tt" event={"ID":"7c943e44-5a8c-4f32-a615-4126fcb73e6a","Type":"ContainerDied","Data":"1814582210b91ff18fd9b73897768c2dfd15ee210ab7f455def51b67f7a96bd9"} Jan 21 11:00:33 crc kubenswrapper[4745]: I0121 11:00:33.249715 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w27tt" event={"ID":"7c943e44-5a8c-4f32-a615-4126fcb73e6a","Type":"ContainerStarted","Data":"98e8c5ba073a8efd3856bae9a4fdcff6bc414e1d1b559a31990dcc0eefd9cf36"} Jan 21 11:00:33 crc kubenswrapper[4745]: I0121 11:00:33.778986 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 11:00:35 crc kubenswrapper[4745]: I0121 11:00:35.267707 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w27tt" event={"ID":"7c943e44-5a8c-4f32-a615-4126fcb73e6a","Type":"ContainerStarted","Data":"ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1"} Jan 21 11:00:36 crc kubenswrapper[4745]: I0121 11:00:36.511585 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 11:00:39 crc kubenswrapper[4745]: I0121 11:00:39.698488 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 
21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.312972 4745 generic.go:334] "Generic (PLEG): container finished" podID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerID="ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1" exitCode=0 Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.313024 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w27tt" event={"ID":"7c943e44-5a8c-4f32-a615-4126fcb73e6a","Type":"ContainerDied","Data":"ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1"} Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.435908 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-dnktc"] Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.437999 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.443121 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.467277 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dnktc"] Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.472842 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.482584 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwdlr\" (UniqueName: \"kubernetes.io/projected/1d06e6bd-564b-441c-8672-3c170053407d-kube-api-access-rwdlr\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.483271 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-scripts\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.483498 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.483548 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-config-data\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.584677 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.584728 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-config-data\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.584781 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rwdlr\" (UniqueName: \"kubernetes.io/projected/1d06e6bd-564b-441c-8672-3c170053407d-kube-api-access-rwdlr\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.584816 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-scripts\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.599095 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-scripts\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.617451 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-config-data\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.620303 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.656435 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwdlr\" (UniqueName: 
\"kubernetes.io/projected/1d06e6bd-564b-441c-8672-3c170053407d-kube-api-access-rwdlr\") pod \"nova-cell0-cell-mapping-dnktc\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.768928 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.945113 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.953132 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:00:40 crc kubenswrapper[4745]: I0121 11:00:40.956184 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.001497 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-config-data\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.001831 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4234d880-d98a-456c-8268-3495854d4d9a-logs\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.001871 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc 
kubenswrapper[4745]: I0121 11:00:41.001923 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l5hg\" (UniqueName: \"kubernetes.io/projected/4234d880-d98a-456c-8268-3495854d4d9a-kube-api-access-4l5hg\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.027802 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.104349 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-config-data\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.104428 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4234d880-d98a-456c-8268-3495854d4d9a-logs\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.104585 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.104648 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l5hg\" (UniqueName: \"kubernetes.io/projected/4234d880-d98a-456c-8268-3495854d4d9a-kube-api-access-4l5hg\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.106129 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4234d880-d98a-456c-8268-3495854d4d9a-logs\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.127851 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-config-data\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.134251 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.218161 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l5hg\" (UniqueName: \"kubernetes.io/projected/4234d880-d98a-456c-8268-3495854d4d9a-kube-api-access-4l5hg\") pod \"nova-api-0\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.258750 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.260336 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.269878 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:00:41 crc kubenswrapper[4745]: I0121 11:00:41.276902 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.310580 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-config-data\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.310683 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.310723 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9424a04-b40b-4947-96d4-9bd611993127-logs\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.310829 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqx7\" (UniqueName: \"kubernetes.io/projected/d9424a04-b40b-4947-96d4-9bd611993127-kube-api-access-bgqx7\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.371599 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.413985 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bgqx7\" (UniqueName: \"kubernetes.io/projected/d9424a04-b40b-4947-96d4-9bd611993127-kube-api-access-bgqx7\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.414071 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-config-data\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.414128 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.414157 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9424a04-b40b-4947-96d4-9bd611993127-logs\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.414852 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9424a04-b40b-4947-96d4-9bd611993127-logs\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.424097 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-config-data\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " 
pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.447221 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.469400 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.479567 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.484075 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgqx7\" (UniqueName: \"kubernetes.io/projected/d9424a04-b40b-4947-96d4-9bd611993127-kube-api-access-bgqx7\") pod \"nova-metadata-0\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.491511 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.522834 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-config-data\") pod \"nova-scheduler-0\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " pod="openstack/nova-scheduler-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.522897 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " 
pod="openstack/nova-scheduler-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.522928 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8zfx\" (UniqueName: \"kubernetes.io/projected/f6cac032-5f94-44ed-86e0-516ccb45d6d6-kube-api-access-t8zfx\") pod \"nova-scheduler-0\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " pod="openstack/nova-scheduler-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.526081 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.539333 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.544579 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.552505 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.554704 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.579565 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.593881 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.614442 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pbrww"] Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.618324 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.641031 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8zfx\" (UniqueName: \"kubernetes.io/projected/f6cac032-5f94-44ed-86e0-516ccb45d6d6-kube-api-access-t8zfx\") pod \"nova-scheduler-0\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " pod="openstack/nova-scheduler-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.641395 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x84pg\" (UniqueName: \"kubernetes.io/projected/41dde358-5a20-4b61-bb73-7a73962de599-kube-api-access-x84pg\") pod \"nova-cell1-novncproxy-0\" (UID: \"41dde358-5a20-4b61-bb73-7a73962de599\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.641499 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-config-data\") pod \"nova-scheduler-0\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " pod="openstack/nova-scheduler-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.641582 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"41dde358-5a20-4b61-bb73-7a73962de599\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.641609 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"41dde358-5a20-4b61-bb73-7a73962de599\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.641661 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " pod="openstack/nova-scheduler-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.720841 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8zfx\" (UniqueName: \"kubernetes.io/projected/f6cac032-5f94-44ed-86e0-516ccb45d6d6-kube-api-access-t8zfx\") pod \"nova-scheduler-0\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " pod="openstack/nova-scheduler-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.724143 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pbrww"] Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.731060 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-config-data\") pod \"nova-scheduler-0\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " pod="openstack/nova-scheduler-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.732003 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " pod="openstack/nova-scheduler-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.743418 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x84pg\" (UniqueName: \"kubernetes.io/projected/41dde358-5a20-4b61-bb73-7a73962de599-kube-api-access-x84pg\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"41dde358-5a20-4b61-bb73-7a73962de599\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.743492 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.743518 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-config\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.743556 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt5t7\" (UniqueName: \"kubernetes.io/projected/8be62d87-2c41-42a9-8327-ca29301a4361-kube-api-access-mt5t7\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.743584 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"41dde358-5a20-4b61-bb73-7a73962de599\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.743601 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"41dde358-5a20-4b61-bb73-7a73962de599\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.743618 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.743697 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-svc\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.743737 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.749275 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"41dde358-5a20-4b61-bb73-7a73962de599\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.775423 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"41dde358-5a20-4b61-bb73-7a73962de599\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.813101 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x84pg\" (UniqueName: \"kubernetes.io/projected/41dde358-5a20-4b61-bb73-7a73962de599-kube-api-access-x84pg\") pod \"nova-cell1-novncproxy-0\" (UID: \"41dde358-5a20-4b61-bb73-7a73962de599\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.846997 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-svc\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.847098 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.847255 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.847281 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-config\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 
11:00:41.847336 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt5t7\" (UniqueName: \"kubernetes.io/projected/8be62d87-2c41-42a9-8327-ca29301a4361-kube-api-access-mt5t7\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.847368 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.848885 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.849483 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-svc\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.850025 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.850688 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-config\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.851160 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.873243 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt5t7\" (UniqueName: \"kubernetes.io/projected/8be62d87-2c41-42a9-8327-ca29301a4361-kube-api-access-mt5t7\") pod \"dnsmasq-dns-9b86998b5-pbrww\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.882255 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.894741 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:41.898259 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:42.362872 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w27tt" event={"ID":"7c943e44-5a8c-4f32-a615-4126fcb73e6a","Type":"ContainerStarted","Data":"ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48"} Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:42.407901 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w27tt" podStartSLOduration=3.8880095949999998 podStartE2EDuration="11.407877705s" podCreationTimestamp="2026-01-21 11:00:31 +0000 UTC" firstStartedPulling="2026-01-21 11:00:33.250746786 +0000 UTC m=+1417.711534384" lastFinishedPulling="2026-01-21 11:00:40.770614896 +0000 UTC m=+1425.231402494" observedRunningTime="2026-01-21 11:00:42.393444922 +0000 UTC m=+1426.854232520" watchObservedRunningTime="2026-01-21 11:00:42.407877705 +0000 UTC m=+1426.868665303" Jan 21 11:00:42 crc kubenswrapper[4745]: I0121 11:00:42.626996 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-dnktc"] Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.060414 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.113080 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.158342 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:00:43 crc kubenswrapper[4745]: W0121 11:00:43.169370 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41dde358_5a20_4b61_bb73_7a73962de599.slice/crio-0f8d411bf6f1222f723e097f11d37a598fc669643b67a5bcdb26a6b8b6ab3f85 WatchSource:0}: Error finding 
container 0f8d411bf6f1222f723e097f11d37a598fc669643b67a5bcdb26a6b8b6ab3f85: Status 404 returned error can't find the container with id 0f8d411bf6f1222f723e097f11d37a598fc669643b67a5bcdb26a6b8b6ab3f85 Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.182576 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pbrww"] Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.200021 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.270362 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nrtlg"] Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.272316 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.278312 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.278421 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.297081 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nrtlg"] Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.396001 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"41dde358-5a20-4b61-bb73-7a73962de599","Type":"ContainerStarted","Data":"0f8d411bf6f1222f723e097f11d37a598fc669643b67a5bcdb26a6b8b6ab3f85"} Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.398379 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-combined-ca-bundle\") pod 
\"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.398443 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv9ws\" (UniqueName: \"kubernetes.io/projected/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-kube-api-access-xv9ws\") pod \"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.398624 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-config-data\") pod \"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.398681 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-scripts\") pod \"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.401085 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6cac032-5f94-44ed-86e0-516ccb45d6d6","Type":"ContainerStarted","Data":"99d3ea570180ed5ecdc33bb30cfafc746ba671eaf0cce6f91c40fa286ac5d912"} Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.402292 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9424a04-b40b-4947-96d4-9bd611993127","Type":"ContainerStarted","Data":"849d6f9df3e6d36d869697f23c7d482647be7b4083f2d03b6831c2d4efc06d2b"} Jan 21 
11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.403119 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" event={"ID":"8be62d87-2c41-42a9-8327-ca29301a4361","Type":"ContainerStarted","Data":"22ab24047920903a02d5b6c5f0d79593be7e77a9585c34f69cb6cd6ad635ab43"} Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.404183 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dnktc" event={"ID":"1d06e6bd-564b-441c-8672-3c170053407d","Type":"ContainerStarted","Data":"1e64c8b49c4937475f2f4a6885064c0c5678a1800a000cab12c53d4ba828cf95"} Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.404217 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dnktc" event={"ID":"1d06e6bd-564b-441c-8672-3c170053407d","Type":"ContainerStarted","Data":"4eee74e17a312fffad39b629eb84e3f9e3dfaff9bdbb2897eeab7e781b424bd4"} Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.407077 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4234d880-d98a-456c-8268-3495854d4d9a","Type":"ContainerStarted","Data":"b69072ac3de261cde4fc70a97ade80a2c5740440d6d336ed3526ae098c22188f"} Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.454011 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-dnktc" podStartSLOduration=3.45398726 podStartE2EDuration="3.45398726s" podCreationTimestamp="2026-01-21 11:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:00:43.432979638 +0000 UTC m=+1427.893767246" watchObservedRunningTime="2026-01-21 11:00:43.45398726 +0000 UTC m=+1427.914774858" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.500679 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-config-data\") pod \"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.500788 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-scripts\") pod \"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.500917 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.500975 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv9ws\" (UniqueName: \"kubernetes.io/projected/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-kube-api-access-xv9ws\") pod \"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.507473 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.509108 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-config-data\") pod \"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.509372 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-scripts\") pod \"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.525849 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv9ws\" (UniqueName: \"kubernetes.io/projected/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-kube-api-access-xv9ws\") pod \"nova-cell1-conductor-db-sync-nrtlg\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:43 crc kubenswrapper[4745]: I0121 11:00:43.633203 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:00:44 crc kubenswrapper[4745]: I0121 11:00:44.390325 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nrtlg"] Jan 21 11:00:44 crc kubenswrapper[4745]: I0121 11:00:44.468933 4745 generic.go:334] "Generic (PLEG): container finished" podID="8be62d87-2c41-42a9-8327-ca29301a4361" containerID="f8a647107f4c2d5c6236275f6144c3c7a97c133fb9ff8b8a9cd48dd93dd960ff" exitCode=0 Jan 21 11:00:44 crc kubenswrapper[4745]: I0121 11:00:44.470329 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" event={"ID":"8be62d87-2c41-42a9-8327-ca29301a4361","Type":"ContainerDied","Data":"f8a647107f4c2d5c6236275f6144c3c7a97c133fb9ff8b8a9cd48dd93dd960ff"} Jan 21 11:00:44 crc kubenswrapper[4745]: I0121 11:00:44.718706 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:00:44 crc kubenswrapper[4745]: I0121 11:00:44.718724 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:00:45 crc kubenswrapper[4745]: I0121 11:00:45.490360 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" event={"ID":"8be62d87-2c41-42a9-8327-ca29301a4361","Type":"ContainerStarted","Data":"6b72a73e9bcff2596bc36085654c662c10f720d6a67844c34a8d84713cabf081"} Jan 21 11:00:45 crc kubenswrapper[4745]: I0121 
11:00:45.490799 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:45 crc kubenswrapper[4745]: I0121 11:00:45.497016 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nrtlg" event={"ID":"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c","Type":"ContainerStarted","Data":"f5ac36da53d52a33cac774a3f877b28b621528750d100a68c55e9ede4f809ba4"} Jan 21 11:00:45 crc kubenswrapper[4745]: I0121 11:00:45.497084 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nrtlg" event={"ID":"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c","Type":"ContainerStarted","Data":"e0734b22afff7fbd72f7b6049821244f2b08d2387a8949c72dfd7abd4e248ad1"} Jan 21 11:00:45 crc kubenswrapper[4745]: I0121 11:00:45.521432 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" podStartSLOduration=4.521412925 podStartE2EDuration="4.521412925s" podCreationTimestamp="2026-01-21 11:00:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:00:45.513426238 +0000 UTC m=+1429.974213836" watchObservedRunningTime="2026-01-21 11:00:45.521412925 +0000 UTC m=+1429.982200533" Jan 21 11:00:45 crc kubenswrapper[4745]: I0121 11:00:45.560572 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nrtlg" podStartSLOduration=2.560548412 podStartE2EDuration="2.560548412s" podCreationTimestamp="2026-01-21 11:00:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:00:45.536400254 +0000 UTC m=+1429.997187852" watchObservedRunningTime="2026-01-21 11:00:45.560548412 +0000 UTC m=+1430.021336010" Jan 21 11:00:45 crc kubenswrapper[4745]: I0121 11:00:45.868554 4745 
patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:00:45 crc kubenswrapper[4745]: I0121 11:00:45.868605 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:00:45 crc kubenswrapper[4745]: I0121 11:00:45.924848 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:00:45 crc kubenswrapper[4745]: I0121 11:00:45.943947 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.629599 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.732543 4745 generic.go:334] "Generic (PLEG): container finished" podID="c1f98a54-6f3d-4171-8389-507c99701317" containerID="2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9" exitCode=137 Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.732632 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1f98a54-6f3d-4171-8389-507c99701317","Type":"ContainerDied","Data":"2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9"} Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.732666 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1f98a54-6f3d-4171-8389-507c99701317","Type":"ContainerDied","Data":"6741a6a3bab381c290f41f69c252dddeccc0f9e588844cca35415db3c5e43b07"} Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.732709 4745 scope.go:117] "RemoveContainer" containerID="2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.732926 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.735407 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4285\" (UniqueName: \"kubernetes.io/projected/c1f98a54-6f3d-4171-8389-507c99701317-kube-api-access-f4285\") pod \"c1f98a54-6f3d-4171-8389-507c99701317\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.735501 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-run-httpd\") pod \"c1f98a54-6f3d-4171-8389-507c99701317\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.735646 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-scripts\") pod \"c1f98a54-6f3d-4171-8389-507c99701317\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.735714 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-log-httpd\") pod \"c1f98a54-6f3d-4171-8389-507c99701317\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.735760 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-combined-ca-bundle\") pod \"c1f98a54-6f3d-4171-8389-507c99701317\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.735815 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-sg-core-conf-yaml\") pod \"c1f98a54-6f3d-4171-8389-507c99701317\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.735845 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-config-data\") pod \"c1f98a54-6f3d-4171-8389-507c99701317\" (UID: \"c1f98a54-6f3d-4171-8389-507c99701317\") " Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.736776 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c1f98a54-6f3d-4171-8389-507c99701317" (UID: "c1f98a54-6f3d-4171-8389-507c99701317"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.736908 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c1f98a54-6f3d-4171-8389-507c99701317" (UID: "c1f98a54-6f3d-4171-8389-507c99701317"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.737962 4745 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.737988 4745 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1f98a54-6f3d-4171-8389-507c99701317-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.754717 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-scripts" (OuterVolumeSpecName: "scripts") pod "c1f98a54-6f3d-4171-8389-507c99701317" (UID: "c1f98a54-6f3d-4171-8389-507c99701317"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.791101 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1f98a54-6f3d-4171-8389-507c99701317-kube-api-access-f4285" (OuterVolumeSpecName: "kube-api-access-f4285") pod "c1f98a54-6f3d-4171-8389-507c99701317" (UID: "c1f98a54-6f3d-4171-8389-507c99701317"). InnerVolumeSpecName "kube-api-access-f4285". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.814782 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c1f98a54-6f3d-4171-8389-507c99701317" (UID: "c1f98a54-6f3d-4171-8389-507c99701317"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.840653 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4285\" (UniqueName: \"kubernetes.io/projected/c1f98a54-6f3d-4171-8389-507c99701317-kube-api-access-f4285\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.840688 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.840697 4745 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.908219 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1f98a54-6f3d-4171-8389-507c99701317" (UID: "c1f98a54-6f3d-4171-8389-507c99701317"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.942732 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:48 crc kubenswrapper[4745]: I0121 11:00:48.948841 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-config-data" (OuterVolumeSpecName: "config-data") pod "c1f98a54-6f3d-4171-8389-507c99701317" (UID: "c1f98a54-6f3d-4171-8389-507c99701317"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.044112 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1f98a54-6f3d-4171-8389-507c99701317-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.127749 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.150017 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.408805 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:00:49 crc kubenswrapper[4745]: E0121 11:00:49.409355 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="ceilometer-notification-agent" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.409383 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="ceilometer-notification-agent" Jan 21 11:00:49 crc kubenswrapper[4745]: E0121 11:00:49.409408 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="proxy-httpd" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.409415 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="proxy-httpd" Jan 21 11:00:49 crc kubenswrapper[4745]: E0121 11:00:49.409429 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="ceilometer-central-agent" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.409435 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="ceilometer-central-agent" Jan 21 11:00:49 
crc kubenswrapper[4745]: E0121 11:00:49.409444 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="sg-core" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.409450 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="sg-core" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.409648 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="ceilometer-notification-agent" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.409671 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="sg-core" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.409687 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="ceilometer-central-agent" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.409700 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1f98a54-6f3d-4171-8389-507c99701317" containerName="proxy-httpd" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.411518 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.414571 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.418152 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.418930 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.553829 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.553905 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-config-data\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.553996 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6ll\" (UniqueName: \"kubernetes.io/projected/30fde3a0-bfde-4879-8e2f-fd5a9066b377-kube-api-access-dn6ll\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.554277 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.554585 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-log-httpd\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.554646 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-run-httpd\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.554672 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-scripts\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.656145 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-log-httpd\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.656203 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-run-httpd\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.656222 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-scripts\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.656261 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.656288 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-config-data\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.656360 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn6ll\" (UniqueName: \"kubernetes.io/projected/30fde3a0-bfde-4879-8e2f-fd5a9066b377-kube-api-access-dn6ll\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.656420 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.656864 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-run-httpd\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 
crc kubenswrapper[4745]: I0121 11:00:49.657165 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-log-httpd\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.662755 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-scripts\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.668557 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.673414 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.673517 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-config-data\") pod \"ceilometer-0\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.679242 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn6ll\" (UniqueName: \"kubernetes.io/projected/30fde3a0-bfde-4879-8e2f-fd5a9066b377-kube-api-access-dn6ll\") pod \"ceilometer-0\" (UID: 
\"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " pod="openstack/ceilometer-0" Jan 21 11:00:49 crc kubenswrapper[4745]: I0121 11:00:49.739121 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:00:50 crc kubenswrapper[4745]: I0121 11:00:50.029735 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1f98a54-6f3d-4171-8389-507c99701317" path="/var/lib/kubelet/pods/c1f98a54-6f3d-4171-8389-507c99701317/volumes" Jan 21 11:00:50 crc kubenswrapper[4745]: I0121 11:00:50.753832 4745 generic.go:334] "Generic (PLEG): container finished" podID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerID="8f5f7bc01ddb73a6c9d98f33675a22c566c1c30950639e4f4a2083eabc92ed40" exitCode=137 Jan 21 11:00:50 crc kubenswrapper[4745]: I0121 11:00:50.753911 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cdbfc4d4d-pm6ln" event={"ID":"1b30531d-e957-4efd-b09c-d5d0b5fd1382","Type":"ContainerDied","Data":"8f5f7bc01ddb73a6c9d98f33675a22c566c1c30950639e4f4a2083eabc92ed40"} Jan 21 11:00:51 crc kubenswrapper[4745]: I0121 11:00:51.844733 4745 scope.go:117] "RemoveContainer" containerID="7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1" Jan 21 11:00:51 crc kubenswrapper[4745]: I0121 11:00:51.901096 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.129602 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-lf2zv"] Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.130014 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" podUID="e3c396b1-66bf-4ba4-a9ac-09682839253d" containerName="dnsmasq-dns" containerID="cri-o://700596e49bad937b21e0b51081168e64e99fd5d4dd81f900c89cf80b9cbc9a60" gracePeriod=10 Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 
11:00:52.192061 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.209824 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.237906 4745 scope.go:117] "RemoveContainer" containerID="1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.553333 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.631782 4745 scope.go:117] "RemoveContainer" containerID="0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.765356 4745 scope.go:117] "RemoveContainer" containerID="2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9" Jan 21 11:00:52 crc kubenswrapper[4745]: E0121 11:00:52.770713 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9\": container with ID starting with 2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9 not found: ID does not exist" containerID="2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.770765 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9"} err="failed to get container status \"2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9\": rpc error: code = NotFound desc = could not find container \"2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9\": container with ID starting with 
2496ceaa28c742de1d5dcd37c5d9c6ff7e25727a1a1f90753771906c1f78ebf9 not found: ID does not exist" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.770794 4745 scope.go:117] "RemoveContainer" containerID="7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1" Jan 21 11:00:52 crc kubenswrapper[4745]: E0121 11:00:52.776057 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1\": container with ID starting with 7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1 not found: ID does not exist" containerID="7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.776109 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1"} err="failed to get container status \"7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1\": rpc error: code = NotFound desc = could not find container \"7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1\": container with ID starting with 7047822a6e0149037b5ca3ffe8051845e991ce5f32d0c9c8f17574a5da8c0ac1 not found: ID does not exist" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.776139 4745 scope.go:117] "RemoveContainer" containerID="1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228" Jan 21 11:00:52 crc kubenswrapper[4745]: E0121 11:00:52.778451 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228\": container with ID starting with 1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228 not found: ID does not exist" containerID="1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228" Jan 21 11:00:52 crc 
kubenswrapper[4745]: I0121 11:00:52.778482 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228"} err="failed to get container status \"1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228\": rpc error: code = NotFound desc = could not find container \"1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228\": container with ID starting with 1f68c9e69ac44389e4fc1f27195279eb98a2ed6350f3b1469f7e6f8784a8d228 not found: ID does not exist" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.778500 4745 scope.go:117] "RemoveContainer" containerID="0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6" Jan 21 11:00:52 crc kubenswrapper[4745]: E0121 11:00:52.785767 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6\": container with ID starting with 0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6 not found: ID does not exist" containerID="0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.785967 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6"} err="failed to get container status \"0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6\": rpc error: code = NotFound desc = could not find container \"0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6\": container with ID starting with 0f26088e16968e0c78c6356788bb27c38e57379f45cf3afd70133519c7fdb1c6 not found: ID does not exist" Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.786043 4745 scope.go:117] "RemoveContainer" containerID="379551ea665f8240a2a6912e8cabdcc3ee0f825c366fa7f7368ad2258467570f" Jan 21 
11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.807824 4745 generic.go:334] "Generic (PLEG): container finished" podID="e3c396b1-66bf-4ba4-a9ac-09682839253d" containerID="700596e49bad937b21e0b51081168e64e99fd5d4dd81f900c89cf80b9cbc9a60" exitCode=0 Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.807917 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" event={"ID":"e3c396b1-66bf-4ba4-a9ac-09682839253d","Type":"ContainerDied","Data":"700596e49bad937b21e0b51081168e64e99fd5d4dd81f900c89cf80b9cbc9a60"} Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.823475 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cdbfc4d4d-pm6ln" event={"ID":"1b30531d-e957-4efd-b09c-d5d0b5fd1382","Type":"ContainerStarted","Data":"bc25ea9aad0810f70da3c41d507aec49871dec0c9b0ace9595b9370aa57e5cb5"} Jan 21 11:00:52 crc kubenswrapper[4745]: I0121 11:00:52.825853 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30fde3a0-bfde-4879-8e2f-fd5a9066b377","Type":"ContainerStarted","Data":"a99d44cd8eb3b378b08e8868fb143b04e75d4bc0012bfcfb633bef4eb10fc416"} Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.507245 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.569403 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w27tt" podUID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerName="registry-server" probeResult="failure" output=< Jan 21 11:00:53 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:00:53 crc kubenswrapper[4745]: > Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.586308 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-svc\") pod \"e3c396b1-66bf-4ba4-a9ac-09682839253d\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.586368 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68b5t\" (UniqueName: \"kubernetes.io/projected/e3c396b1-66bf-4ba4-a9ac-09682839253d-kube-api-access-68b5t\") pod \"e3c396b1-66bf-4ba4-a9ac-09682839253d\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.586395 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-swift-storage-0\") pod \"e3c396b1-66bf-4ba4-a9ac-09682839253d\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.586512 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-nb\") pod \"e3c396b1-66bf-4ba4-a9ac-09682839253d\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.586634 4745 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-config\") pod \"e3c396b1-66bf-4ba4-a9ac-09682839253d\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.586675 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-sb\") pod \"e3c396b1-66bf-4ba4-a9ac-09682839253d\" (UID: \"e3c396b1-66bf-4ba4-a9ac-09682839253d\") " Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.610083 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c396b1-66bf-4ba4-a9ac-09682839253d-kube-api-access-68b5t" (OuterVolumeSpecName: "kube-api-access-68b5t") pod "e3c396b1-66bf-4ba4-a9ac-09682839253d" (UID: "e3c396b1-66bf-4ba4-a9ac-09682839253d"). InnerVolumeSpecName "kube-api-access-68b5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.689704 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68b5t\" (UniqueName: \"kubernetes.io/projected/e3c396b1-66bf-4ba4-a9ac-09682839253d-kube-api-access-68b5t\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.790531 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e3c396b1-66bf-4ba4-a9ac-09682839253d" (UID: "e3c396b1-66bf-4ba4-a9ac-09682839253d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.792008 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.850333 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4234d880-d98a-456c-8268-3495854d4d9a","Type":"ContainerStarted","Data":"007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00"} Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.863525 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"41dde358-5a20-4b61-bb73-7a73962de599","Type":"ContainerStarted","Data":"cd622bb69e1010a418c885dc65ec8d126263330e15e36b9121620a4847684dc2"} Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.863618 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="41dde358-5a20-4b61-bb73-7a73962de599" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://cd622bb69e1010a418c885dc65ec8d126263330e15e36b9121620a4847684dc2" gracePeriod=30 Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.870569 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6cac032-5f94-44ed-86e0-516ccb45d6d6","Type":"ContainerStarted","Data":"ad01a4eb3f6ef59f4363d85c6439684b1e5103067db4b36b2a5f62254bfc4ef1"} Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.872862 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9424a04-b40b-4947-96d4-9bd611993127","Type":"ContainerStarted","Data":"a36a4292ae7d8b12b599957231b398270a4f32b1bbc8362b867fc32f932686a0"} Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.872887 4745 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9424a04-b40b-4947-96d4-9bd611993127","Type":"ContainerStarted","Data":"9ac5b0ae2bfe2e576f2a3ff5316090ba36d767ff44014e130b2b137efa3d7efc"} Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.872987 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9424a04-b40b-4947-96d4-9bd611993127" containerName="nova-metadata-log" containerID="cri-o://9ac5b0ae2bfe2e576f2a3ff5316090ba36d767ff44014e130b2b137efa3d7efc" gracePeriod=30 Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.873212 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9424a04-b40b-4947-96d4-9bd611993127" containerName="nova-metadata-metadata" containerID="cri-o://a36a4292ae7d8b12b599957231b398270a4f32b1bbc8362b867fc32f932686a0" gracePeriod=30 Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.875993 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" event={"ID":"e3c396b1-66bf-4ba4-a9ac-09682839253d","Type":"ContainerDied","Data":"290de00e3da7e5ffdae163325ae16659a45d1edced39895dd9419f48c9cc2ea1"} Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.876038 4745 scope.go:117] "RemoveContainer" containerID="700596e49bad937b21e0b51081168e64e99fd5d4dd81f900c89cf80b9cbc9a60" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.876133 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-lf2zv" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.898389 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=4.170327877 podStartE2EDuration="12.8983725s" podCreationTimestamp="2026-01-21 11:00:41 +0000 UTC" firstStartedPulling="2026-01-21 11:00:43.194141228 +0000 UTC m=+1427.654928826" lastFinishedPulling="2026-01-21 11:00:51.922185851 +0000 UTC m=+1436.382973449" observedRunningTime="2026-01-21 11:00:53.887793941 +0000 UTC m=+1438.348581539" watchObservedRunningTime="2026-01-21 11:00:53.8983725 +0000 UTC m=+1438.359160098" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.910170 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.230466455 podStartE2EDuration="12.910153751s" podCreationTimestamp="2026-01-21 11:00:41 +0000 UTC" firstStartedPulling="2026-01-21 11:00:43.234999941 +0000 UTC m=+1427.695787539" lastFinishedPulling="2026-01-21 11:00:51.914687237 +0000 UTC m=+1436.375474835" observedRunningTime="2026-01-21 11:00:53.907415776 +0000 UTC m=+1438.368203374" watchObservedRunningTime="2026-01-21 11:00:53.910153751 +0000 UTC m=+1438.370941339" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.911110 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-config" (OuterVolumeSpecName: "config") pod "e3c396b1-66bf-4ba4-a9ac-09682839253d" (UID: "e3c396b1-66bf-4ba4-a9ac-09682839253d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.920374 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e3c396b1-66bf-4ba4-a9ac-09682839253d" (UID: "e3c396b1-66bf-4ba4-a9ac-09682839253d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.936749 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e3c396b1-66bf-4ba4-a9ac-09682839253d" (UID: "e3c396b1-66bf-4ba4-a9ac-09682839253d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.953824 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.157335522 podStartE2EDuration="12.953806551s" podCreationTimestamp="2026-01-21 11:00:41 +0000 UTC" firstStartedPulling="2026-01-21 11:00:43.117352154 +0000 UTC m=+1427.578139752" lastFinishedPulling="2026-01-21 11:00:51.913823193 +0000 UTC m=+1436.374610781" observedRunningTime="2026-01-21 11:00:53.949137973 +0000 UTC m=+1438.409925571" watchObservedRunningTime="2026-01-21 11:00:53.953806551 +0000 UTC m=+1438.414594149" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.996730 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e3c396b1-66bf-4ba4-a9ac-09682839253d" (UID: "e3c396b1-66bf-4ba4-a9ac-09682839253d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.999059 4745 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.999082 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.999092 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:53 crc kubenswrapper[4745]: I0121 11:00:53.999102 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e3c396b1-66bf-4ba4-a9ac-09682839253d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:54 crc kubenswrapper[4745]: I0121 11:00:54.129598 4745 scope.go:117] "RemoveContainer" containerID="049dbaf16ce9d5abe72313c217702d27c35494f9a09200c33599820b8794d98a" Jan 21 11:00:54 crc kubenswrapper[4745]: I0121 11:00:54.212608 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-lf2zv"] Jan 21 11:00:54 crc kubenswrapper[4745]: I0121 11:00:54.227199 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-lf2zv"] Jan 21 11:00:54 crc kubenswrapper[4745]: I0121 11:00:54.887115 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4234d880-d98a-456c-8268-3495854d4d9a","Type":"ContainerStarted","Data":"3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd"} Jan 21 11:00:54 crc kubenswrapper[4745]: I0121 11:00:54.889521 4745 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30fde3a0-bfde-4879-8e2f-fd5a9066b377","Type":"ContainerStarted","Data":"bc0b8277eb95ee7518305e5342fe22b592cb5fdea835c6d6c6d50f5f4b119b82"} Jan 21 11:00:54 crc kubenswrapper[4745]: I0121 11:00:54.891062 4745 generic.go:334] "Generic (PLEG): container finished" podID="d9424a04-b40b-4947-96d4-9bd611993127" containerID="9ac5b0ae2bfe2e576f2a3ff5316090ba36d767ff44014e130b2b137efa3d7efc" exitCode=143 Jan 21 11:00:54 crc kubenswrapper[4745]: I0121 11:00:54.891138 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9424a04-b40b-4947-96d4-9bd611993127","Type":"ContainerDied","Data":"9ac5b0ae2bfe2e576f2a3ff5316090ba36d767ff44014e130b2b137efa3d7efc"} Jan 21 11:00:55 crc kubenswrapper[4745]: I0121 11:00:55.913664 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30fde3a0-bfde-4879-8e2f-fd5a9066b377","Type":"ContainerStarted","Data":"0d837b4b6dc0579edf34828d4c9349a63ed870fe82cedd572364ef82fa477815"} Jan 21 11:00:55 crc kubenswrapper[4745]: I0121 11:00:55.913980 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30fde3a0-bfde-4879-8e2f-fd5a9066b377","Type":"ContainerStarted","Data":"a595a994a4364ac9c55172494ff1a81834bf90a58e2b3c89050af60127a86016"} Jan 21 11:00:56 crc kubenswrapper[4745]: I0121 11:00:56.023981 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3c396b1-66bf-4ba4-a9ac-09682839253d" path="/var/lib/kubelet/pods/e3c396b1-66bf-4ba4-a9ac-09682839253d/volumes" Jan 21 11:00:56 crc kubenswrapper[4745]: I0121 11:00:56.061834 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=7.228962831 podStartE2EDuration="16.061815431s" podCreationTimestamp="2026-01-21 11:00:40 +0000 UTC" firstStartedPulling="2026-01-21 11:00:43.06250473 +0000 UTC m=+1427.523292328" 
lastFinishedPulling="2026-01-21 11:00:51.89535733 +0000 UTC m=+1436.356144928" observedRunningTime="2026-01-21 11:00:54.909669926 +0000 UTC m=+1439.370457524" watchObservedRunningTime="2026-01-21 11:00:56.061815431 +0000 UTC m=+1440.522603029" Jan 21 11:00:56 crc kubenswrapper[4745]: I0121 11:00:56.553425 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:00:56 crc kubenswrapper[4745]: I0121 11:00:56.553481 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:00:56 crc kubenswrapper[4745]: I0121 11:00:56.884343 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:00:56 crc kubenswrapper[4745]: I0121 11:00:56.899771 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 11:00:57 crc kubenswrapper[4745]: I0121 11:00:57.937784 4745 generic.go:334] "Generic (PLEG): container finished" podID="1d06e6bd-564b-441c-8672-3c170053407d" containerID="1e64c8b49c4937475f2f4a6885064c0c5678a1800a000cab12c53d4ba828cf95" exitCode=0 Jan 21 11:00:57 crc kubenswrapper[4745]: I0121 11:00:57.937863 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dnktc" event={"ID":"1d06e6bd-564b-441c-8672-3c170053407d","Type":"ContainerDied","Data":"1e64c8b49c4937475f2f4a6885064c0c5678a1800a000cab12c53d4ba828cf95"} Jan 21 11:00:57 crc kubenswrapper[4745]: I0121 11:00:57.941434 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30fde3a0-bfde-4879-8e2f-fd5a9066b377","Type":"ContainerStarted","Data":"eab60d880237ec1ae529d52a975ed5eaf22dd7cad609a57c3b1f071c91b8aa2f"} Jan 21 11:00:57 crc kubenswrapper[4745]: I0121 11:00:57.941574 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:00:58 crc kubenswrapper[4745]: I0121 
11:00:58.005721 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.735408788 podStartE2EDuration="9.005698829s" podCreationTimestamp="2026-01-21 11:00:49 +0000 UTC" firstStartedPulling="2026-01-21 11:00:52.631772194 +0000 UTC m=+1437.092559792" lastFinishedPulling="2026-01-21 11:00:56.902062235 +0000 UTC m=+1441.362849833" observedRunningTime="2026-01-21 11:00:57.993834266 +0000 UTC m=+1442.454621874" watchObservedRunningTime="2026-01-21 11:00:58.005698829 +0000 UTC m=+1442.466486427" Jan 21 11:00:58 crc kubenswrapper[4745]: I0121 11:00:58.950787 4745 generic.go:334] "Generic (PLEG): container finished" podID="d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c" containerID="f5ac36da53d52a33cac774a3f877b28b621528750d100a68c55e9ede4f809ba4" exitCode=0 Jan 21 11:00:58 crc kubenswrapper[4745]: I0121 11:00:58.950872 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nrtlg" event={"ID":"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c","Type":"ContainerDied","Data":"f5ac36da53d52a33cac774a3f877b28b621528750d100a68c55e9ede4f809ba4"} Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.475590 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.639398 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwdlr\" (UniqueName: \"kubernetes.io/projected/1d06e6bd-564b-441c-8672-3c170053407d-kube-api-access-rwdlr\") pod \"1d06e6bd-564b-441c-8672-3c170053407d\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.639655 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-config-data\") pod \"1d06e6bd-564b-441c-8672-3c170053407d\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.639721 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-scripts\") pod \"1d06e6bd-564b-441c-8672-3c170053407d\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.639764 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-combined-ca-bundle\") pod \"1d06e6bd-564b-441c-8672-3c170053407d\" (UID: \"1d06e6bd-564b-441c-8672-3c170053407d\") " Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.648774 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d06e6bd-564b-441c-8672-3c170053407d-kube-api-access-rwdlr" (OuterVolumeSpecName: "kube-api-access-rwdlr") pod "1d06e6bd-564b-441c-8672-3c170053407d" (UID: "1d06e6bd-564b-441c-8672-3c170053407d"). InnerVolumeSpecName "kube-api-access-rwdlr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.648862 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-scripts" (OuterVolumeSpecName: "scripts") pod "1d06e6bd-564b-441c-8672-3c170053407d" (UID: "1d06e6bd-564b-441c-8672-3c170053407d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.672464 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d06e6bd-564b-441c-8672-3c170053407d" (UID: "1d06e6bd-564b-441c-8672-3c170053407d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.684666 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-config-data" (OuterVolumeSpecName: "config-data") pod "1d06e6bd-564b-441c-8672-3c170053407d" (UID: "1d06e6bd-564b-441c-8672-3c170053407d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.742258 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwdlr\" (UniqueName: \"kubernetes.io/projected/1d06e6bd-564b-441c-8672-3c170053407d-kube-api-access-rwdlr\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.742622 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.742638 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.742651 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d06e6bd-564b-441c-8672-3c170053407d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.961005 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-dnktc" Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.966916 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-dnktc" event={"ID":"1d06e6bd-564b-441c-8672-3c170053407d","Type":"ContainerDied","Data":"4eee74e17a312fffad39b629eb84e3f9e3dfaff9bdbb2897eeab7e781b424bd4"} Jan 21 11:00:59 crc kubenswrapper[4745]: I0121 11:00:59.966966 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eee74e17a312fffad39b629eb84e3f9e3dfaff9bdbb2897eeab7e781b424bd4" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.030487 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.030602 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.201602 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.201850 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4234d880-d98a-456c-8268-3495854d4d9a" containerName="nova-api-log" containerID="cri-o://007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00" gracePeriod=30 Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.202280 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4234d880-d98a-456c-8268-3495854d4d9a" containerName="nova-api-api" containerID="cri-o://3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd" gracePeriod=30 Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.253520 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483221-n75hv"] Jan 21 11:01:00 crc kubenswrapper[4745]: E0121 
11:01:00.254029 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c396b1-66bf-4ba4-a9ac-09682839253d" containerName="init" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.254047 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c396b1-66bf-4ba4-a9ac-09682839253d" containerName="init" Jan 21 11:01:00 crc kubenswrapper[4745]: E0121 11:01:00.254064 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3c396b1-66bf-4ba4-a9ac-09682839253d" containerName="dnsmasq-dns" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.254072 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3c396b1-66bf-4ba4-a9ac-09682839253d" containerName="dnsmasq-dns" Jan 21 11:01:00 crc kubenswrapper[4745]: E0121 11:01:00.254079 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d06e6bd-564b-441c-8672-3c170053407d" containerName="nova-manage" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.254084 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d06e6bd-564b-441c-8672-3c170053407d" containerName="nova-manage" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.254275 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3c396b1-66bf-4ba4-a9ac-09682839253d" containerName="dnsmasq-dns" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.254288 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d06e6bd-564b-441c-8672-3c170053407d" containerName="nova-manage" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.254976 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.256414 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6mx5\" (UniqueName: \"kubernetes.io/projected/c9657c82-86ad-461b-af13-737409270945-kube-api-access-f6mx5\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.256508 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-config-data\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.256582 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-combined-ca-bundle\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.256618 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-fernet-keys\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.322608 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483221-n75hv"] Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.358897 4745 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-f6mx5\" (UniqueName: \"kubernetes.io/projected/c9657c82-86ad-461b-af13-737409270945-kube-api-access-f6mx5\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.358997 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-config-data\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.359052 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-combined-ca-bundle\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.359081 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-fernet-keys\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.391611 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-fernet-keys\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.405108 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 
11:01:00.405321 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f6cac032-5f94-44ed-86e0-516ccb45d6d6" containerName="nova-scheduler-scheduler" containerID="cri-o://ad01a4eb3f6ef59f4363d85c6439684b1e5103067db4b36b2a5f62254bfc4ef1" gracePeriod=30 Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.429421 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-combined-ca-bundle\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.445778 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-config-data\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.446008 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6mx5\" (UniqueName: \"kubernetes.io/projected/c9657c82-86ad-461b-af13-737409270945-kube-api-access-f6mx5\") pod \"keystone-cron-29483221-n75hv\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.553203 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.565212 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-scripts\") pod \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.565296 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-combined-ca-bundle\") pod \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.565405 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xv9ws\" (UniqueName: \"kubernetes.io/projected/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-kube-api-access-xv9ws\") pod \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.565569 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-config-data\") pod \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\" (UID: \"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c\") " Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.573100 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-kube-api-access-xv9ws" (OuterVolumeSpecName: "kube-api-access-xv9ws") pod "d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c" (UID: "d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c"). InnerVolumeSpecName "kube-api-access-xv9ws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.593957 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-scripts" (OuterVolumeSpecName: "scripts") pod "d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c" (UID: "d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.616008 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.662717 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c" (UID: "d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.670021 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.670071 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.670087 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xv9ws\" (UniqueName: \"kubernetes.io/projected/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-kube-api-access-xv9ws\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.717699 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-config-data" (OuterVolumeSpecName: "config-data") pod "d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c" (UID: "d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.773282 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.971834 4745 generic.go:334] "Generic (PLEG): container finished" podID="4234d880-d98a-456c-8268-3495854d4d9a" containerID="007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00" exitCode=143 Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.971905 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4234d880-d98a-456c-8268-3495854d4d9a","Type":"ContainerDied","Data":"007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00"} Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.974004 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nrtlg" event={"ID":"d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c","Type":"ContainerDied","Data":"e0734b22afff7fbd72f7b6049821244f2b08d2387a8949c72dfd7abd4e248ad1"} Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.974057 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nrtlg" Jan 21 11:01:00 crc kubenswrapper[4745]: I0121 11:01:00.974062 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0734b22afff7fbd72f7b6049821244f2b08d2387a8949c72dfd7abd4e248ad1" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.069456 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 11:01:01 crc kubenswrapper[4745]: E0121 11:01:01.069981 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c" containerName="nova-cell1-conductor-db-sync" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.069999 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c" containerName="nova-cell1-conductor-db-sync" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.070202 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c" containerName="nova-cell1-conductor-db-sync" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.070937 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.076062 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.081080 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ecedf-353c-4497-98b8-202c4ce5dd29-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"498ecedf-353c-4497-98b8-202c4ce5dd29\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.081257 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs9zr\" (UniqueName: \"kubernetes.io/projected/498ecedf-353c-4497-98b8-202c4ce5dd29-kube-api-access-vs9zr\") pod \"nova-cell1-conductor-0\" (UID: \"498ecedf-353c-4497-98b8-202c4ce5dd29\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.081279 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ecedf-353c-4497-98b8-202c4ce5dd29-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"498ecedf-353c-4497-98b8-202c4ce5dd29\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.081766 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.193299 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ecedf-353c-4497-98b8-202c4ce5dd29-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"498ecedf-353c-4497-98b8-202c4ce5dd29\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:01 crc 
kubenswrapper[4745]: I0121 11:01:01.193575 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs9zr\" (UniqueName: \"kubernetes.io/projected/498ecedf-353c-4497-98b8-202c4ce5dd29-kube-api-access-vs9zr\") pod \"nova-cell1-conductor-0\" (UID: \"498ecedf-353c-4497-98b8-202c4ce5dd29\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.193602 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ecedf-353c-4497-98b8-202c4ce5dd29-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"498ecedf-353c-4497-98b8-202c4ce5dd29\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.200229 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/498ecedf-353c-4497-98b8-202c4ce5dd29-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"498ecedf-353c-4497-98b8-202c4ce5dd29\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.199734 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483221-n75hv"] Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.202250 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/498ecedf-353c-4497-98b8-202c4ce5dd29-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"498ecedf-353c-4497-98b8-202c4ce5dd29\") " pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.216659 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs9zr\" (UniqueName: \"kubernetes.io/projected/498ecedf-353c-4497-98b8-202c4ce5dd29-kube-api-access-vs9zr\") pod \"nova-cell1-conductor-0\" (UID: \"498ecedf-353c-4497-98b8-202c4ce5dd29\") " 
pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.395456 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.962101 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.983272 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"498ecedf-353c-4497-98b8-202c4ce5dd29","Type":"ContainerStarted","Data":"b62aae6655313ed0e9f5b7a9af9d86bb51680f32b53f77c3c26e775e40ea5083"} Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.986166 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483221-n75hv" event={"ID":"c9657c82-86ad-461b-af13-737409270945","Type":"ContainerStarted","Data":"4b42fca1ae7e45ee4c2631cb27eff9f2c92fca32e817d2dd95316d1668186f54"} Jan 21 11:01:01 crc kubenswrapper[4745]: I0121 11:01:01.987282 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483221-n75hv" event={"ID":"c9657c82-86ad-461b-af13-737409270945","Type":"ContainerStarted","Data":"d5c69de1dd4fc0c8f1fe238db82f41298962a9c1ccdebb054d488e0dd92edbf8"} Jan 21 11:01:02 crc kubenswrapper[4745]: I0121 11:01:02.882481 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:02 crc kubenswrapper[4745]: I0121 11:01:02.904115 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483221-n75hv" podStartSLOduration=2.904096581 podStartE2EDuration="2.904096581s" podCreationTimestamp="2026-01-21 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:02.011597224 +0000 UTC m=+1446.472384822" watchObservedRunningTime="2026-01-21 11:01:02.904096581 +0000 UTC m=+1447.364884169" Jan 21 11:01:02 crc kubenswrapper[4745]: I0121 11:01:02.937805 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4234d880-d98a-456c-8268-3495854d4d9a-logs\") pod \"4234d880-d98a-456c-8268-3495854d4d9a\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " Jan 21 11:01:02 crc kubenswrapper[4745]: I0121 11:01:02.937862 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-config-data\") pod \"4234d880-d98a-456c-8268-3495854d4d9a\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " Jan 21 11:01:02 crc kubenswrapper[4745]: I0121 11:01:02.938501 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4234d880-d98a-456c-8268-3495854d4d9a-logs" (OuterVolumeSpecName: "logs") pod "4234d880-d98a-456c-8268-3495854d4d9a" (UID: "4234d880-d98a-456c-8268-3495854d4d9a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:02 crc kubenswrapper[4745]: I0121 11:01:02.938634 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l5hg\" (UniqueName: \"kubernetes.io/projected/4234d880-d98a-456c-8268-3495854d4d9a-kube-api-access-4l5hg\") pod \"4234d880-d98a-456c-8268-3495854d4d9a\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " Jan 21 11:01:02 crc kubenswrapper[4745]: I0121 11:01:02.938770 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-combined-ca-bundle\") pod \"4234d880-d98a-456c-8268-3495854d4d9a\" (UID: \"4234d880-d98a-456c-8268-3495854d4d9a\") " Jan 21 11:01:02 crc kubenswrapper[4745]: I0121 11:01:02.939249 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4234d880-d98a-456c-8268-3495854d4d9a-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:02 crc kubenswrapper[4745]: I0121 11:01:02.948444 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4234d880-d98a-456c-8268-3495854d4d9a-kube-api-access-4l5hg" (OuterVolumeSpecName: "kube-api-access-4l5hg") pod "4234d880-d98a-456c-8268-3495854d4d9a" (UID: "4234d880-d98a-456c-8268-3495854d4d9a"). InnerVolumeSpecName "kube-api-access-4l5hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.003617 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4234d880-d98a-456c-8268-3495854d4d9a" (UID: "4234d880-d98a-456c-8268-3495854d4d9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.013661 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-config-data" (OuterVolumeSpecName: "config-data") pod "4234d880-d98a-456c-8268-3495854d4d9a" (UID: "4234d880-d98a-456c-8268-3495854d4d9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.028081 4745 generic.go:334] "Generic (PLEG): container finished" podID="4234d880-d98a-456c-8268-3495854d4d9a" containerID="3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd" exitCode=0 Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.028278 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4234d880-d98a-456c-8268-3495854d4d9a","Type":"ContainerDied","Data":"3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd"} Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.028324 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4234d880-d98a-456c-8268-3495854d4d9a","Type":"ContainerDied","Data":"b69072ac3de261cde4fc70a97ade80a2c5740440d6d336ed3526ae098c22188f"} Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.028347 4745 scope.go:117] "RemoveContainer" containerID="3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.028252 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.037892 4745 generic.go:334] "Generic (PLEG): container finished" podID="f6cac032-5f94-44ed-86e0-516ccb45d6d6" containerID="ad01a4eb3f6ef59f4363d85c6439684b1e5103067db4b36b2a5f62254bfc4ef1" exitCode=0 Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.038059 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6cac032-5f94-44ed-86e0-516ccb45d6d6","Type":"ContainerDied","Data":"ad01a4eb3f6ef59f4363d85c6439684b1e5103067db4b36b2a5f62254bfc4ef1"} Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.041169 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.041202 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l5hg\" (UniqueName: \"kubernetes.io/projected/4234d880-d98a-456c-8268-3495854d4d9a-kube-api-access-4l5hg\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.041214 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4234d880-d98a-456c-8268-3495854d4d9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.043214 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"498ecedf-353c-4497-98b8-202c4ce5dd29","Type":"ContainerStarted","Data":"c7251c392d79f78e61b75813a486c718020ce01b4e70f65f698dd62895bf8a5f"} Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.043470 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.072035 4745 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.072011489 podStartE2EDuration="2.072011489s" podCreationTimestamp="2026-01-21 11:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:03.069232393 +0000 UTC m=+1447.530019991" watchObservedRunningTime="2026-01-21 11:01:03.072011489 +0000 UTC m=+1447.532799087" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.144485 4745 scope.go:117] "RemoveContainer" containerID="007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.205301 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.208719 4745 scope.go:117] "RemoveContainer" containerID="3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd" Jan 21 11:01:03 crc kubenswrapper[4745]: E0121 11:01:03.212646 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd\": container with ID starting with 3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd not found: ID does not exist" containerID="3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.212681 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd"} err="failed to get container status \"3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd\": rpc error: code = NotFound desc = could not find container \"3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd\": container with ID starting with 
3db0bf8cf28ac1f7f396bf39d548e680dbbda2e04f40bf73c3c4f7585d7fdabd not found: ID does not exist" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.212704 4745 scope.go:117] "RemoveContainer" containerID="007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00" Jan 21 11:01:03 crc kubenswrapper[4745]: E0121 11:01:03.213214 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00\": container with ID starting with 007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00 not found: ID does not exist" containerID="007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.213232 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00"} err="failed to get container status \"007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00\": rpc error: code = NotFound desc = could not find container \"007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00\": container with ID starting with 007748a8d4bbafe0bbd2e278c48989e312f0547d998de02539f18dda4e535c00 not found: ID does not exist" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.218583 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.227180 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:03 crc kubenswrapper[4745]: E0121 11:01:03.227607 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4234d880-d98a-456c-8268-3495854d4d9a" containerName="nova-api-api" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.227626 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4234d880-d98a-456c-8268-3495854d4d9a" 
containerName="nova-api-api" Jan 21 11:01:03 crc kubenswrapper[4745]: E0121 11:01:03.227646 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4234d880-d98a-456c-8268-3495854d4d9a" containerName="nova-api-log" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.227653 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4234d880-d98a-456c-8268-3495854d4d9a" containerName="nova-api-log" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.227817 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="4234d880-d98a-456c-8268-3495854d4d9a" containerName="nova-api-log" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.227841 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="4234d880-d98a-456c-8268-3495854d4d9a" containerName="nova-api-api" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.228748 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.234514 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.235329 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.258838 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w27tt" podUID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerName="registry-server" probeResult="failure" output=< Jan 21 11:01:03 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:01:03 crc kubenswrapper[4745]: > Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.291629 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.351190 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-logs\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.351252 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jsw8\" (UniqueName: \"kubernetes.io/projected/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-kube-api-access-6jsw8\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.351502 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-config-data\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.351619 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.453085 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t8zfx\" (UniqueName: \"kubernetes.io/projected/f6cac032-5f94-44ed-86e0-516ccb45d6d6-kube-api-access-t8zfx\") pod \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.453163 4745 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-combined-ca-bundle\") pod \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.453245 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-config-data\") pod \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\" (UID: \"f6cac032-5f94-44ed-86e0-516ccb45d6d6\") " Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.453582 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-config-data\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.453670 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.453741 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-logs\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.453805 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jsw8\" (UniqueName: \"kubernetes.io/projected/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-kube-api-access-6jsw8\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " 
pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.454964 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-logs\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.458633 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-config-data\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.464696 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6cac032-5f94-44ed-86e0-516ccb45d6d6-kube-api-access-t8zfx" (OuterVolumeSpecName: "kube-api-access-t8zfx") pod "f6cac032-5f94-44ed-86e0-516ccb45d6d6" (UID: "f6cac032-5f94-44ed-86e0-516ccb45d6d6"). InnerVolumeSpecName "kube-api-access-t8zfx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.488905 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.502132 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jsw8\" (UniqueName: \"kubernetes.io/projected/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-kube-api-access-6jsw8\") pod \"nova-api-0\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.510448 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-config-data" (OuterVolumeSpecName: "config-data") pod "f6cac032-5f94-44ed-86e0-516ccb45d6d6" (UID: "f6cac032-5f94-44ed-86e0-516ccb45d6d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.555508 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t8zfx\" (UniqueName: \"kubernetes.io/projected/f6cac032-5f94-44ed-86e0-516ccb45d6d6-kube-api-access-t8zfx\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.555555 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.557387 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6cac032-5f94-44ed-86e0-516ccb45d6d6" (UID: "f6cac032-5f94-44ed-86e0-516ccb45d6d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.589022 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:03 crc kubenswrapper[4745]: I0121 11:01:03.657052 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6cac032-5f94-44ed-86e0-516ccb45d6d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.018668 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4234d880-d98a-456c-8268-3495854d4d9a" path="/var/lib/kubelet/pods/4234d880-d98a-456c-8268-3495854d4d9a/volumes" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.093617 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.093667 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f6cac032-5f94-44ed-86e0-516ccb45d6d6","Type":"ContainerDied","Data":"99d3ea570180ed5ecdc33bb30cfafc746ba671eaf0cce6f91c40fa286ac5d912"} Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.093805 4745 scope.go:117] "RemoveContainer" containerID="ad01a4eb3f6ef59f4363d85c6439684b1e5103067db4b36b2a5f62254bfc4ef1" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.262560 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.285821 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.298095 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:04 crc kubenswrapper[4745]: E0121 11:01:04.298613 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6cac032-5f94-44ed-86e0-516ccb45d6d6" containerName="nova-scheduler-scheduler" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.298634 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6cac032-5f94-44ed-86e0-516ccb45d6d6" containerName="nova-scheduler-scheduler" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.298887 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6cac032-5f94-44ed-86e0-516ccb45d6d6" containerName="nova-scheduler-scheduler" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.300394 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.309410 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.310918 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.324194 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.483653 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4ecb1759-26cf-453e-ae21-b393c94475df\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.484516 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t664d\" (UniqueName: \"kubernetes.io/projected/4ecb1759-26cf-453e-ae21-b393c94475df-kube-api-access-t664d\") pod \"nova-scheduler-0\" (UID: \"4ecb1759-26cf-453e-ae21-b393c94475df\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.484663 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-config-data\") pod \"nova-scheduler-0\" (UID: \"4ecb1759-26cf-453e-ae21-b393c94475df\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.586631 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"4ecb1759-26cf-453e-ae21-b393c94475df\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.586778 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t664d\" (UniqueName: \"kubernetes.io/projected/4ecb1759-26cf-453e-ae21-b393c94475df-kube-api-access-t664d\") pod \"nova-scheduler-0\" (UID: \"4ecb1759-26cf-453e-ae21-b393c94475df\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.586931 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-config-data\") pod \"nova-scheduler-0\" (UID: \"4ecb1759-26cf-453e-ae21-b393c94475df\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.598287 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-config-data\") pod \"nova-scheduler-0\" (UID: \"4ecb1759-26cf-453e-ae21-b393c94475df\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.598849 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4ecb1759-26cf-453e-ae21-b393c94475df\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.635162 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t664d\" (UniqueName: \"kubernetes.io/projected/4ecb1759-26cf-453e-ae21-b393c94475df-kube-api-access-t664d\") pod \"nova-scheduler-0\" (UID: \"4ecb1759-26cf-453e-ae21-b393c94475df\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:04 crc kubenswrapper[4745]: I0121 11:01:04.642983 4745 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:01:05 crc kubenswrapper[4745]: I0121 11:01:05.147329 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0e1441f-0704-4a1d-a961-8ecd9c24d40f","Type":"ContainerStarted","Data":"5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753"} Jan 21 11:01:05 crc kubenswrapper[4745]: I0121 11:01:05.147624 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0e1441f-0704-4a1d-a961-8ecd9c24d40f","Type":"ContainerStarted","Data":"4f2cffc7afbcee089300d591514899548bd9eddf3ca99d028b1aae5e86e18ace"} Jan 21 11:01:05 crc kubenswrapper[4745]: I0121 11:01:05.323499 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:06 crc kubenswrapper[4745]: I0121 11:01:06.011470 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6cac032-5f94-44ed-86e0-516ccb45d6d6" path="/var/lib/kubelet/pods/f6cac032-5f94-44ed-86e0-516ccb45d6d6/volumes" Jan 21 11:01:06 crc kubenswrapper[4745]: I0121 11:01:06.189137 4745 generic.go:334] "Generic (PLEG): container finished" podID="c9657c82-86ad-461b-af13-737409270945" containerID="4b42fca1ae7e45ee4c2631cb27eff9f2c92fca32e817d2dd95316d1668186f54" exitCode=0 Jan 21 11:01:06 crc kubenswrapper[4745]: I0121 11:01:06.189238 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483221-n75hv" event={"ID":"c9657c82-86ad-461b-af13-737409270945","Type":"ContainerDied","Data":"4b42fca1ae7e45ee4c2631cb27eff9f2c92fca32e817d2dd95316d1668186f54"} Jan 21 11:01:06 crc kubenswrapper[4745]: I0121 11:01:06.196630 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4ecb1759-26cf-453e-ae21-b393c94475df","Type":"ContainerStarted","Data":"d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089"} Jan 21 11:01:06 crc kubenswrapper[4745]: I0121 11:01:06.197498 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4ecb1759-26cf-453e-ae21-b393c94475df","Type":"ContainerStarted","Data":"e1d026903861e5bbe54597483e2aa9aee4564b6e2ba3601913fdf9690d732963"} Jan 21 11:01:06 crc kubenswrapper[4745]: I0121 11:01:06.206183 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0e1441f-0704-4a1d-a961-8ecd9c24d40f","Type":"ContainerStarted","Data":"17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255"} Jan 21 11:01:06 crc kubenswrapper[4745]: I0121 11:01:06.235882 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.235866651 podStartE2EDuration="2.235866651s" podCreationTimestamp="2026-01-21 11:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:06.230344601 +0000 UTC m=+1450.691132199" watchObservedRunningTime="2026-01-21 11:01:06.235866651 +0000 UTC m=+1450.696654249" Jan 21 11:01:06 crc kubenswrapper[4745]: I0121 11:01:06.271575 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.271554314 podStartE2EDuration="3.271554314s" podCreationTimestamp="2026-01-21 11:01:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:06.258735515 +0000 UTC m=+1450.719523113" watchObservedRunningTime="2026-01-21 11:01:06.271554314 +0000 UTC m=+1450.732341922" Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.646616 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.791323 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-combined-ca-bundle\") pod \"c9657c82-86ad-461b-af13-737409270945\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.791420 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-config-data\") pod \"c9657c82-86ad-461b-af13-737409270945\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.791485 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-fernet-keys\") pod \"c9657c82-86ad-461b-af13-737409270945\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.791624 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6mx5\" (UniqueName: \"kubernetes.io/projected/c9657c82-86ad-461b-af13-737409270945-kube-api-access-f6mx5\") pod \"c9657c82-86ad-461b-af13-737409270945\" (UID: \"c9657c82-86ad-461b-af13-737409270945\") " Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.799785 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c9657c82-86ad-461b-af13-737409270945" (UID: "c9657c82-86ad-461b-af13-737409270945"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.803691 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9657c82-86ad-461b-af13-737409270945-kube-api-access-f6mx5" (OuterVolumeSpecName: "kube-api-access-f6mx5") pod "c9657c82-86ad-461b-af13-737409270945" (UID: "c9657c82-86ad-461b-af13-737409270945"). InnerVolumeSpecName "kube-api-access-f6mx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.825729 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c9657c82-86ad-461b-af13-737409270945" (UID: "c9657c82-86ad-461b-af13-737409270945"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.865112 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-config-data" (OuterVolumeSpecName: "config-data") pod "c9657c82-86ad-461b-af13-737409270945" (UID: "c9657c82-86ad-461b-af13-737409270945"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.893706 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.893735 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.893743 4745 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c9657c82-86ad-461b-af13-737409270945-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:07 crc kubenswrapper[4745]: I0121 11:01:07.893754 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6mx5\" (UniqueName: \"kubernetes.io/projected/c9657c82-86ad-461b-af13-737409270945-kube-api-access-f6mx5\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:08 crc kubenswrapper[4745]: I0121 11:01:08.235060 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483221-n75hv" Jan 21 11:01:08 crc kubenswrapper[4745]: I0121 11:01:08.235178 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483221-n75hv" event={"ID":"c9657c82-86ad-461b-af13-737409270945","Type":"ContainerDied","Data":"d5c69de1dd4fc0c8f1fe238db82f41298962a9c1ccdebb054d488e0dd92edbf8"} Jan 21 11:01:08 crc kubenswrapper[4745]: I0121 11:01:08.235217 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5c69de1dd4fc0c8f1fe238db82f41298962a9c1ccdebb054d488e0dd92edbf8" Jan 21 11:01:09 crc kubenswrapper[4745]: I0121 11:01:09.644492 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 11:01:10 crc kubenswrapper[4745]: I0121 11:01:10.037334 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-5cdbfc4d4d-pm6ln" podUID="1b30531d-e957-4efd-b09c-d5d0b5fd1382" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: connection refused" Jan 21 11:01:11 crc kubenswrapper[4745]: I0121 11:01:11.460039 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 21 11:01:12 crc kubenswrapper[4745]: I0121 11:01:12.249891 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:01:12 crc kubenswrapper[4745]: I0121 11:01:12.304969 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:01:12 crc kubenswrapper[4745]: I0121 11:01:12.504831 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w27tt"] Jan 21 11:01:13 crc kubenswrapper[4745]: I0121 11:01:13.272778 4745 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openshift-marketplace/redhat-operators-w27tt" podUID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerName="registry-server" containerID="cri-o://ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48" gracePeriod=2 Jan 21 11:01:13 crc kubenswrapper[4745]: I0121 11:01:13.590503 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:01:13 crc kubenswrapper[4745]: I0121 11:01:13.590883 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:01:13 crc kubenswrapper[4745]: I0121 11:01:13.876860 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.009588 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zs6d\" (UniqueName: \"kubernetes.io/projected/7c943e44-5a8c-4f32-a615-4126fcb73e6a-kube-api-access-4zs6d\") pod \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.009687 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-catalog-content\") pod \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.009747 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-utilities\") pod \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\" (UID: \"7c943e44-5a8c-4f32-a615-4126fcb73e6a\") " Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.012923 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-utilities" (OuterVolumeSpecName: "utilities") pod "7c943e44-5a8c-4f32-a615-4126fcb73e6a" (UID: "7c943e44-5a8c-4f32-a615-4126fcb73e6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.034917 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c943e44-5a8c-4f32-a615-4126fcb73e6a-kube-api-access-4zs6d" (OuterVolumeSpecName: "kube-api-access-4zs6d") pod "7c943e44-5a8c-4f32-a615-4126fcb73e6a" (UID: "7c943e44-5a8c-4f32-a615-4126fcb73e6a"). InnerVolumeSpecName "kube-api-access-4zs6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.115636 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zs6d\" (UniqueName: \"kubernetes.io/projected/7c943e44-5a8c-4f32-a615-4126fcb73e6a-kube-api-access-4zs6d\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.115715 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.137668 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c943e44-5a8c-4f32-a615-4126fcb73e6a" (UID: "7c943e44-5a8c-4f32-a615-4126fcb73e6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.217405 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c943e44-5a8c-4f32-a615-4126fcb73e6a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.284651 4745 generic.go:334] "Generic (PLEG): container finished" podID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerID="ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48" exitCode=0 Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.284704 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w27tt" event={"ID":"7c943e44-5a8c-4f32-a615-4126fcb73e6a","Type":"ContainerDied","Data":"ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48"} Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.284739 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w27tt" event={"ID":"7c943e44-5a8c-4f32-a615-4126fcb73e6a","Type":"ContainerDied","Data":"98e8c5ba073a8efd3856bae9a4fdcff6bc414e1d1b559a31990dcc0eefd9cf36"} Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.284763 4745 scope.go:117] "RemoveContainer" containerID="ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.284931 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w27tt" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.328478 4745 scope.go:117] "RemoveContainer" containerID="ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.328748 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w27tt"] Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.336817 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w27tt"] Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.361740 4745 scope.go:117] "RemoveContainer" containerID="1814582210b91ff18fd9b73897768c2dfd15ee210ab7f455def51b67f7a96bd9" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.428686 4745 scope.go:117] "RemoveContainer" containerID="ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48" Jan 21 11:01:14 crc kubenswrapper[4745]: E0121 11:01:14.431791 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48\": container with ID starting with ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48 not found: ID does not exist" containerID="ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.431828 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48"} err="failed to get container status \"ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48\": rpc error: code = NotFound desc = could not find container \"ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48\": container with ID starting with ca76a2435b4c9588a055c35015a3598ea86128732080da7a3fd8530227719e48 not found: ID does 
not exist" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.431848 4745 scope.go:117] "RemoveContainer" containerID="ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1" Jan 21 11:01:14 crc kubenswrapper[4745]: E0121 11:01:14.442612 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1\": container with ID starting with ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1 not found: ID does not exist" containerID="ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.442641 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1"} err="failed to get container status \"ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1\": rpc error: code = NotFound desc = could not find container \"ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1\": container with ID starting with ca7a56ceab2fa5788e2c0cc0ce607d3c3f0a4794f0f8575a4a4d828f3a8080e1 not found: ID does not exist" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.442661 4745 scope.go:117] "RemoveContainer" containerID="1814582210b91ff18fd9b73897768c2dfd15ee210ab7f455def51b67f7a96bd9" Jan 21 11:01:14 crc kubenswrapper[4745]: E0121 11:01:14.445072 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1814582210b91ff18fd9b73897768c2dfd15ee210ab7f455def51b67f7a96bd9\": container with ID starting with 1814582210b91ff18fd9b73897768c2dfd15ee210ab7f455def51b67f7a96bd9 not found: ID does not exist" containerID="1814582210b91ff18fd9b73897768c2dfd15ee210ab7f455def51b67f7a96bd9" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.445134 4745 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1814582210b91ff18fd9b73897768c2dfd15ee210ab7f455def51b67f7a96bd9"} err="failed to get container status \"1814582210b91ff18fd9b73897768c2dfd15ee210ab7f455def51b67f7a96bd9\": rpc error: code = NotFound desc = could not find container \"1814582210b91ff18fd9b73897768c2dfd15ee210ab7f455def51b67f7a96bd9\": container with ID starting with 1814582210b91ff18fd9b73897768c2dfd15ee210ab7f455def51b67f7a96bd9 not found: ID does not exist" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.644017 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.673917 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.673924 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.210:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:01:14 crc kubenswrapper[4745]: I0121 11:01:14.684069 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 11:01:15 crc kubenswrapper[4745]: I0121 11:01:15.329303 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 11:01:15 crc kubenswrapper[4745]: I0121 11:01:15.866430 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:01:15 crc kubenswrapper[4745]: I0121 11:01:15.866881 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:01:16 crc kubenswrapper[4745]: I0121 11:01:16.012550 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" path="/var/lib/kubelet/pods/7c943e44-5a8c-4f32-a615-4126fcb73e6a/volumes" Jan 21 11:01:19 crc kubenswrapper[4745]: I0121 11:01:19.752344 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 11:01:21 crc kubenswrapper[4745]: I0121 11:01:21.976181 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.595781 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.596204 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.596625 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.596645 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.601252 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.604515 4745 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.877700 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-d72v9"] Jan 21 11:01:23 crc kubenswrapper[4745]: E0121 11:01:23.878120 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerName="extract-content" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.878133 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerName="extract-content" Jan 21 11:01:23 crc kubenswrapper[4745]: E0121 11:01:23.878162 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerName="registry-server" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.878168 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerName="registry-server" Jan 21 11:01:23 crc kubenswrapper[4745]: E0121 11:01:23.878189 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerName="extract-utilities" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.878195 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" containerName="extract-utilities" Jan 21 11:01:23 crc kubenswrapper[4745]: E0121 11:01:23.878204 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9657c82-86ad-461b-af13-737409270945" containerName="keystone-cron" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.878210 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9657c82-86ad-461b-af13-737409270945" containerName="keystone-cron" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.878374 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c943e44-5a8c-4f32-a615-4126fcb73e6a" 
containerName="registry-server" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.878394 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9657c82-86ad-461b-af13-737409270945" containerName="keystone-cron" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.881208 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:23 crc kubenswrapper[4745]: I0121 11:01:23.901733 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-d72v9"] Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.041749 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqsjn\" (UniqueName: \"kubernetes.io/projected/9e321695-cccb-4fdf-b1cb-abae2afbfb93-kube-api-access-kqsjn\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.042067 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-config\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.042106 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.042148 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.042212 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.042274 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.143695 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.144093 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqsjn\" (UniqueName: \"kubernetes.io/projected/9e321695-cccb-4fdf-b1cb-abae2afbfb93-kube-api-access-kqsjn\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.144203 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-config\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.144323 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.144438 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.144591 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.145774 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.146880 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.147001 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.147296 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.152215 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-config\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.179514 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqsjn\" (UniqueName: \"kubernetes.io/projected/9e321695-cccb-4fdf-b1cb-abae2afbfb93-kube-api-access-kqsjn\") pod \"dnsmasq-dns-6b7bbf7cf9-d72v9\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.258867 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:24 crc kubenswrapper[4745]: E0121 11:01:24.379464 4745 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41dde358_5a20_4b61_bb73_7a73962de599.slice/crio-conmon-cd622bb69e1010a418c885dc65ec8d126263330e15e36b9121620a4847684dc2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41dde358_5a20_4b61_bb73_7a73962de599.slice/crio-cd622bb69e1010a418c885dc65ec8d126263330e15e36b9121620a4847684dc2.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.404969 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-5cdbfc4d4d-pm6ln" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.408811 4745 generic.go:334] "Generic (PLEG): container finished" podID="41dde358-5a20-4b61-bb73-7a73962de599" containerID="cd622bb69e1010a418c885dc65ec8d126263330e15e36b9121620a4847684dc2" exitCode=137 Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.408856 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"41dde358-5a20-4b61-bb73-7a73962de599","Type":"ContainerDied","Data":"cd622bb69e1010a418c885dc65ec8d126263330e15e36b9121620a4847684dc2"} Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.521022 4745 generic.go:334] "Generic (PLEG): container finished" podID="d9424a04-b40b-4947-96d4-9bd611993127" containerID="a36a4292ae7d8b12b599957231b398270a4f32b1bbc8362b867fc32f932686a0" exitCode=137 Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.522021 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d9424a04-b40b-4947-96d4-9bd611993127","Type":"ContainerDied","Data":"a36a4292ae7d8b12b599957231b398270a4f32b1bbc8362b867fc32f932686a0"} Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.534030 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78cb545d88-xv4bf"] Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.534285 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon-log" containerID="cri-o://9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a" gracePeriod=30 Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.534807 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" containerID="cri-o://167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3" gracePeriod=30 Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.677060 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.773930 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.776442 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-config-data\") pod \"d9424a04-b40b-4947-96d4-9bd611993127\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.776569 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgqx7\" (UniqueName: \"kubernetes.io/projected/d9424a04-b40b-4947-96d4-9bd611993127-kube-api-access-bgqx7\") pod \"d9424a04-b40b-4947-96d4-9bd611993127\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.776699 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9424a04-b40b-4947-96d4-9bd611993127-logs\") pod \"d9424a04-b40b-4947-96d4-9bd611993127\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.776854 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-combined-ca-bundle\") pod \"d9424a04-b40b-4947-96d4-9bd611993127\" (UID: \"d9424a04-b40b-4947-96d4-9bd611993127\") " Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.777988 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9424a04-b40b-4947-96d4-9bd611993127-logs" (OuterVolumeSpecName: "logs") pod "d9424a04-b40b-4947-96d4-9bd611993127" (UID: "d9424a04-b40b-4947-96d4-9bd611993127"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.802236 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9424a04-b40b-4947-96d4-9bd611993127-kube-api-access-bgqx7" (OuterVolumeSpecName: "kube-api-access-bgqx7") pod "d9424a04-b40b-4947-96d4-9bd611993127" (UID: "d9424a04-b40b-4947-96d4-9bd611993127"). InnerVolumeSpecName "kube-api-access-bgqx7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.815489 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-config-data" (OuterVolumeSpecName: "config-data") pod "d9424a04-b40b-4947-96d4-9bd611993127" (UID: "d9424a04-b40b-4947-96d4-9bd611993127"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.818895 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9424a04-b40b-4947-96d4-9bd611993127" (UID: "d9424a04-b40b-4947-96d4-9bd611993127"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.880190 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-combined-ca-bundle\") pod \"41dde358-5a20-4b61-bb73-7a73962de599\" (UID: \"41dde358-5a20-4b61-bb73-7a73962de599\") " Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.880695 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-config-data\") pod \"41dde358-5a20-4b61-bb73-7a73962de599\" (UID: \"41dde358-5a20-4b61-bb73-7a73962de599\") " Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.880759 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x84pg\" (UniqueName: \"kubernetes.io/projected/41dde358-5a20-4b61-bb73-7a73962de599-kube-api-access-x84pg\") pod \"41dde358-5a20-4b61-bb73-7a73962de599\" (UID: \"41dde358-5a20-4b61-bb73-7a73962de599\") " Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.881483 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9424a04-b40b-4947-96d4-9bd611993127-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.881517 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.881560 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9424a04-b40b-4947-96d4-9bd611993127-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.881579 4745 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-bgqx7\" (UniqueName: \"kubernetes.io/projected/d9424a04-b40b-4947-96d4-9bd611993127-kube-api-access-bgqx7\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.883695 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41dde358-5a20-4b61-bb73-7a73962de599-kube-api-access-x84pg" (OuterVolumeSpecName: "kube-api-access-x84pg") pod "41dde358-5a20-4b61-bb73-7a73962de599" (UID: "41dde358-5a20-4b61-bb73-7a73962de599"). InnerVolumeSpecName "kube-api-access-x84pg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.913503 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-config-data" (OuterVolumeSpecName: "config-data") pod "41dde358-5a20-4b61-bb73-7a73962de599" (UID: "41dde358-5a20-4b61-bb73-7a73962de599"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.932754 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41dde358-5a20-4b61-bb73-7a73962de599" (UID: "41dde358-5a20-4b61-bb73-7a73962de599"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.983317 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.983368 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41dde358-5a20-4b61-bb73-7a73962de599-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:24 crc kubenswrapper[4745]: I0121 11:01:24.983387 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x84pg\" (UniqueName: \"kubernetes.io/projected/41dde358-5a20-4b61-bb73-7a73962de599-kube-api-access-x84pg\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:25 crc kubenswrapper[4745]: W0121 11:01:25.061226 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e321695_cccb_4fdf_b1cb_abae2afbfb93.slice/crio-28b5f8dc3854ad1b9243ca6ff7a22b60ce1d904c5706b8fb0bb8188bfb2206b4 WatchSource:0}: Error finding container 28b5f8dc3854ad1b9243ca6ff7a22b60ce1d904c5706b8fb0bb8188bfb2206b4: Status 404 returned error can't find the container with id 28b5f8dc3854ad1b9243ca6ff7a22b60ce1d904c5706b8fb0bb8188bfb2206b4 Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.062924 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-d72v9"] Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.532401 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.532400 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"41dde358-5a20-4b61-bb73-7a73962de599","Type":"ContainerDied","Data":"0f8d411bf6f1222f723e097f11d37a598fc669643b67a5bcdb26a6b8b6ab3f85"} Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.532981 4745 scope.go:117] "RemoveContainer" containerID="cd622bb69e1010a418c885dc65ec8d126263330e15e36b9121620a4847684dc2" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.536054 4745 generic.go:334] "Generic (PLEG): container finished" podID="9e321695-cccb-4fdf-b1cb-abae2afbfb93" containerID="f16449f2a2b8ba59b532abc0ec940b20ca498aac49deaa755572615a5c2d7b8d" exitCode=0 Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.536116 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" event={"ID":"9e321695-cccb-4fdf-b1cb-abae2afbfb93","Type":"ContainerDied","Data":"f16449f2a2b8ba59b532abc0ec940b20ca498aac49deaa755572615a5c2d7b8d"} Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.536145 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" event={"ID":"9e321695-cccb-4fdf-b1cb-abae2afbfb93","Type":"ContainerStarted","Data":"28b5f8dc3854ad1b9243ca6ff7a22b60ce1d904c5706b8fb0bb8188bfb2206b4"} Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.544935 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9424a04-b40b-4947-96d4-9bd611993127","Type":"ContainerDied","Data":"849d6f9df3e6d36d869697f23c7d482647be7b4083f2d03b6831c2d4efc06d2b"} Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.545228 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.572594 4745 scope.go:117] "RemoveContainer" containerID="a36a4292ae7d8b12b599957231b398270a4f32b1bbc8362b867fc32f932686a0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.610559 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.632132 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.655784 4745 scope.go:117] "RemoveContainer" containerID="9ac5b0ae2bfe2e576f2a3ff5316090ba36d767ff44014e130b2b137efa3d7efc" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.674282 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:01:25 crc kubenswrapper[4745]: E0121 11:01:25.674904 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9424a04-b40b-4947-96d4-9bd611993127" containerName="nova-metadata-metadata" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.674928 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9424a04-b40b-4947-96d4-9bd611993127" containerName="nova-metadata-metadata" Jan 21 11:01:25 crc kubenswrapper[4745]: E0121 11:01:25.674953 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41dde358-5a20-4b61-bb73-7a73962de599" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.674962 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="41dde358-5a20-4b61-bb73-7a73962de599" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 11:01:25 crc kubenswrapper[4745]: E0121 11:01:25.674971 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9424a04-b40b-4947-96d4-9bd611993127" containerName="nova-metadata-log" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 
11:01:25.674979 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9424a04-b40b-4947-96d4-9bd611993127" containerName="nova-metadata-log" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.675213 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="41dde358-5a20-4b61-bb73-7a73962de599" containerName="nova-cell1-novncproxy-novncproxy" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.675237 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9424a04-b40b-4947-96d4-9bd611993127" containerName="nova-metadata-log" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.675247 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9424a04-b40b-4947-96d4-9bd611993127" containerName="nova-metadata-metadata" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.676062 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.685917 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.686166 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.686275 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.773898 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.797679 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.825064 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:01:25 crc 
kubenswrapper[4745]: I0121 11:01:25.832222 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.834033 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.842619 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.842627 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.844238 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.844270 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.844288 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq4zb\" (UniqueName: \"kubernetes.io/projected/7ce186c4-d95d-4846-bf1e-db0cc6952fac-kube-api-access-sq4zb\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.844345 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.844412 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.850879 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.946012 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-config-data\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.946421 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.946458 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.946480 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq4zb\" (UniqueName: \"kubernetes.io/projected/7ce186c4-d95d-4846-bf1e-db0cc6952fac-kube-api-access-sq4zb\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.946499 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9741031c-f343-43b5-95f8-254aac8275ea-logs\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.946587 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.946634 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.946702 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmt69\" (UniqueName: \"kubernetes.io/projected/9741031c-f343-43b5-95f8-254aac8275ea-kube-api-access-xmt69\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:25 crc kubenswrapper[4745]: 
I0121 11:01:25.946729 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.946780 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.952006 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.952052 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.952304 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.952785 4745 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ce186c4-d95d-4846-bf1e-db0cc6952fac-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:25 crc kubenswrapper[4745]: I0121 11:01:25.965089 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq4zb\" (UniqueName: \"kubernetes.io/projected/7ce186c4-d95d-4846-bf1e-db0cc6952fac-kube-api-access-sq4zb\") pod \"nova-cell1-novncproxy-0\" (UID: \"7ce186c4-d95d-4846-bf1e-db0cc6952fac\") " pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.010958 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41dde358-5a20-4b61-bb73-7a73962de599" path="/var/lib/kubelet/pods/41dde358-5a20-4b61-bb73-7a73962de599/volumes" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.011695 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9424a04-b40b-4947-96d4-9bd611993127" path="/var/lib/kubelet/pods/d9424a04-b40b-4947-96d4-9bd611993127/volumes" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.047472 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.053038 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.053186 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.053242 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-config-data\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.053295 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9741031c-f343-43b5-95f8-254aac8275ea-logs\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.053570 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmt69\" (UniqueName: \"kubernetes.io/projected/9741031c-f343-43b5-95f8-254aac8275ea-kube-api-access-xmt69\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.053979 4745 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9741031c-f343-43b5-95f8-254aac8275ea-logs\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.057441 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.061161 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.073637 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-config-data\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.079720 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmt69\" (UniqueName: \"kubernetes.io/projected/9741031c-f343-43b5-95f8-254aac8275ea-kube-api-access-xmt69\") pod \"nova-metadata-0\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " pod="openstack/nova-metadata-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.180189 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.591266 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" event={"ID":"9e321695-cccb-4fdf-b1cb-abae2afbfb93","Type":"ContainerStarted","Data":"61e6ac1f12590196a89e51472d813326aa95ffbd3e8e06e8b0cc50efa6a09431"} Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.591551 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.626498 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" podStartSLOduration=3.626482078 podStartE2EDuration="3.626482078s" podCreationTimestamp="2026-01-21 11:01:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:26.62216723 +0000 UTC m=+1471.082954828" watchObservedRunningTime="2026-01-21 11:01:26.626482078 +0000 UTC m=+1471.087269666" Jan 21 11:01:26 crc kubenswrapper[4745]: I0121 11:01:26.919089 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:01:26 crc kubenswrapper[4745]: W0121 11:01:26.920891 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9741031c_f343_43b5_95f8_254aac8275ea.slice/crio-fe19053c0b4df984b1bf87a5ba8e4bbb7cc5ec1d5327c7a2e7df770ae75a0ff0 WatchSource:0}: Error finding container fe19053c0b4df984b1bf87a5ba8e4bbb7cc5ec1d5327c7a2e7df770ae75a0ff0: Status 404 returned error can't find the container with id fe19053c0b4df984b1bf87a5ba8e4bbb7cc5ec1d5327c7a2e7df770ae75a0ff0 Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.087843 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 21 11:01:27 crc 
kubenswrapper[4745]: I0121 11:01:27.386070 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.387983 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerName="nova-api-api" containerID="cri-o://17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255" gracePeriod=30 Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.387886 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerName="nova-api-log" containerID="cri-o://5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753" gracePeriod=30 Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.632563 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9741031c-f343-43b5-95f8-254aac8275ea","Type":"ContainerStarted","Data":"61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102"} Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.632641 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9741031c-f343-43b5-95f8-254aac8275ea","Type":"ContainerStarted","Data":"4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b"} Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.632655 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9741031c-f343-43b5-95f8-254aac8275ea","Type":"ContainerStarted","Data":"fe19053c0b4df984b1bf87a5ba8e4bbb7cc5ec1d5327c7a2e7df770ae75a0ff0"} Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.643237 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"7ce186c4-d95d-4846-bf1e-db0cc6952fac","Type":"ContainerStarted","Data":"fc2bce70e0cd8f16e82a8def93da54cf826bc93f5512b23e5945cec623729acf"} Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.643276 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"7ce186c4-d95d-4846-bf1e-db0cc6952fac","Type":"ContainerStarted","Data":"fc5a95ea7346c96fbb8aaced74c77cdb00d1860e3abfca9cc039eb30a9df3678"} Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.651974 4745 generic.go:334] "Generic (PLEG): container finished" podID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerID="5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753" exitCode=143 Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.652202 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0e1441f-0704-4a1d-a961-8ecd9c24d40f","Type":"ContainerDied","Data":"5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753"} Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.663282 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.663257029 podStartE2EDuration="2.663257029s" podCreationTimestamp="2026-01-21 11:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:27.658020816 +0000 UTC m=+1472.118808424" watchObservedRunningTime="2026-01-21 11:01:27.663257029 +0000 UTC m=+1472.124044637" Jan 21 11:01:27 crc kubenswrapper[4745]: I0121 11:01:27.691821 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.691800287 podStartE2EDuration="2.691800287s" podCreationTimestamp="2026-01-21 11:01:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 11:01:27.681999749 +0000 UTC m=+1472.142787347" watchObservedRunningTime="2026-01-21 11:01:27.691800287 +0000 UTC m=+1472.152587885" Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.103139 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.103567 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="ceilometer-central-agent" containerID="cri-o://bc0b8277eb95ee7518305e5342fe22b592cb5fdea835c6d6c6d50f5f4b119b82" gracePeriod=30 Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.103766 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="proxy-httpd" containerID="cri-o://eab60d880237ec1ae529d52a975ed5eaf22dd7cad609a57c3b1f071c91b8aa2f" gracePeriod=30 Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.103807 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="sg-core" containerID="cri-o://0d837b4b6dc0579edf34828d4c9349a63ed870fe82cedd572364ef82fa477815" gracePeriod=30 Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.103846 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="ceilometer-notification-agent" containerID="cri-o://a595a994a4364ac9c55172494ff1a81834bf90a58e2b3c89050af60127a86016" gracePeriod=30 Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.168757 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.169100 4745 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openstack/kube-state-metrics-0" podUID="6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd" containerName="kube-state-metrics" containerID="cri-o://2e8457d97d6f8c7a0b6f7fb524f7691d6db22f51ec5ca02805da55e3707b3daa" gracePeriod=30 Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.674544 4745 generic.go:334] "Generic (PLEG): container finished" podID="8d2746d8-86a1-412c-8cac-b737fff90886" containerID="167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3" exitCode=0 Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.674574 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb545d88-xv4bf" event={"ID":"8d2746d8-86a1-412c-8cac-b737fff90886","Type":"ContainerDied","Data":"167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3"} Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.675013 4745 scope.go:117] "RemoveContainer" containerID="3643118f481e7226b702137d2af839c8cf6efc660091c1400f2eeeabfda81e6f" Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.679913 4745 generic.go:334] "Generic (PLEG): container finished" podID="6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd" containerID="2e8457d97d6f8c7a0b6f7fb524f7691d6db22f51ec5ca02805da55e3707b3daa" exitCode=2 Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.680006 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd","Type":"ContainerDied","Data":"2e8457d97d6f8c7a0b6f7fb524f7691d6db22f51ec5ca02805da55e3707b3daa"} Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.680690 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.685111 4745 generic.go:334] "Generic (PLEG): container finished" podID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerID="eab60d880237ec1ae529d52a975ed5eaf22dd7cad609a57c3b1f071c91b8aa2f" exitCode=0 Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.685142 4745 generic.go:334] "Generic (PLEG): container finished" podID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerID="0d837b4b6dc0579edf34828d4c9349a63ed870fe82cedd572364ef82fa477815" exitCode=2 Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.685292 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30fde3a0-bfde-4879-8e2f-fd5a9066b377","Type":"ContainerDied","Data":"eab60d880237ec1ae529d52a975ed5eaf22dd7cad609a57c3b1f071c91b8aa2f"} Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.685348 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30fde3a0-bfde-4879-8e2f-fd5a9066b377","Type":"ContainerDied","Data":"0d837b4b6dc0579edf34828d4c9349a63ed870fe82cedd572364ef82fa477815"} Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.760379 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l67wg\" (UniqueName: \"kubernetes.io/projected/6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd-kube-api-access-l67wg\") pod \"6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd\" (UID: \"6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd\") " Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.770004 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd-kube-api-access-l67wg" (OuterVolumeSpecName: "kube-api-access-l67wg") pod "6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd" (UID: "6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd"). InnerVolumeSpecName "kube-api-access-l67wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:28 crc kubenswrapper[4745]: I0121 11:01:28.863344 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l67wg\" (UniqueName: \"kubernetes.io/projected/6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd-kube-api-access-l67wg\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.695556 4745 generic.go:334] "Generic (PLEG): container finished" podID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerID="bc0b8277eb95ee7518305e5342fe22b592cb5fdea835c6d6c6d50f5f4b119b82" exitCode=0 Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.695786 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30fde3a0-bfde-4879-8e2f-fd5a9066b377","Type":"ContainerDied","Data":"bc0b8277eb95ee7518305e5342fe22b592cb5fdea835c6d6c6d50f5f4b119b82"} Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.703801 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd","Type":"ContainerDied","Data":"d56b164aa52512e4a8781d210d74c2d1dec792343c46f816383d69656163a892"} Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.703965 4745 scope.go:117] "RemoveContainer" containerID="2e8457d97d6f8c7a0b6f7fb524f7691d6db22f51ec5ca02805da55e3707b3daa" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.703838 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.710695 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.760593 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.770570 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.788029 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:01:29 crc kubenswrapper[4745]: E0121 11:01:29.788709 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd" containerName="kube-state-metrics" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.788782 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd" containerName="kube-state-metrics" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.789038 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd" containerName="kube-state-metrics" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.790095 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.792919 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.793223 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.833250 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.888213 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6slgc\" (UniqueName: \"kubernetes.io/projected/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-kube-api-access-6slgc\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.888420 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.888466 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.888648 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.991328 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6slgc\" (UniqueName: \"kubernetes.io/projected/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-kube-api-access-6slgc\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.991415 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.991439 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.991502 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:29 crc kubenswrapper[4745]: I0121 11:01:29.997029 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:30 crc kubenswrapper[4745]: I0121 11:01:29.999964 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:30 crc kubenswrapper[4745]: I0121 11:01:30.014049 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:30 crc kubenswrapper[4745]: I0121 11:01:30.020032 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd" path="/var/lib/kubelet/pods/6d9b85f9-734f-4948-8e8b-ad1a45e5c2fd/volumes" Jan 21 11:01:30 crc kubenswrapper[4745]: I0121 11:01:30.026442 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6slgc\" (UniqueName: \"kubernetes.io/projected/3d17af5b-6f17-42ef-a3fc-ceec818bb54f-kube-api-access-6slgc\") pod \"kube-state-metrics-0\" (UID: \"3d17af5b-6f17-42ef-a3fc-ceec818bb54f\") " pod="openstack/kube-state-metrics-0" Jan 21 11:01:30 crc kubenswrapper[4745]: I0121 11:01:30.110445 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 21 11:01:30 crc kubenswrapper[4745]: I0121 11:01:30.770776 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.049082 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.181437 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.181920 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.471396 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.635892 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-config-data\") pod \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.636030 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-combined-ca-bundle\") pod \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.636073 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jsw8\" (UniqueName: \"kubernetes.io/projected/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-kube-api-access-6jsw8\") pod \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " Jan 21 11:01:31 crc 
kubenswrapper[4745]: I0121 11:01:31.636142 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-logs\") pod \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\" (UID: \"a0e1441f-0704-4a1d-a961-8ecd9c24d40f\") " Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.638858 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-logs" (OuterVolumeSpecName: "logs") pod "a0e1441f-0704-4a1d-a961-8ecd9c24d40f" (UID: "a0e1441f-0704-4a1d-a961-8ecd9c24d40f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.643579 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.644348 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-kube-api-access-6jsw8" (OuterVolumeSpecName: "kube-api-access-6jsw8") pod "a0e1441f-0704-4a1d-a961-8ecd9c24d40f" (UID: "a0e1441f-0704-4a1d-a961-8ecd9c24d40f"). InnerVolumeSpecName "kube-api-access-6jsw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.688873 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-config-data" (OuterVolumeSpecName: "config-data") pod "a0e1441f-0704-4a1d-a961-8ecd9c24d40f" (UID: "a0e1441f-0704-4a1d-a961-8ecd9c24d40f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.745699 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.745736 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jsw8\" (UniqueName: \"kubernetes.io/projected/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-kube-api-access-6jsw8\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.747175 4745 generic.go:334] "Generic (PLEG): container finished" podID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerID="17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255" exitCode=0 Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.747224 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0e1441f-0704-4a1d-a961-8ecd9c24d40f","Type":"ContainerDied","Data":"17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255"} Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.747249 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a0e1441f-0704-4a1d-a961-8ecd9c24d40f","Type":"ContainerDied","Data":"4f2cffc7afbcee089300d591514899548bd9eddf3ca99d028b1aae5e86e18ace"} Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.747266 4745 scope.go:117] "RemoveContainer" containerID="17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.747381 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.748203 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a0e1441f-0704-4a1d-a961-8ecd9c24d40f" (UID: "a0e1441f-0704-4a1d-a961-8ecd9c24d40f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.770254 4745 generic.go:334] "Generic (PLEG): container finished" podID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerID="a595a994a4364ac9c55172494ff1a81834bf90a58e2b3c89050af60127a86016" exitCode=0 Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.770339 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30fde3a0-bfde-4879-8e2f-fd5a9066b377","Type":"ContainerDied","Data":"a595a994a4364ac9c55172494ff1a81834bf90a58e2b3c89050af60127a86016"} Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.772281 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3d17af5b-6f17-42ef-a3fc-ceec818bb54f","Type":"ContainerStarted","Data":"0e8199d56365cb71e812b1a12c459d9c2977722b9ab7bbbbc296aa4409f2e847"} Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.772306 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"3d17af5b-6f17-42ef-a3fc-ceec818bb54f","Type":"ContainerStarted","Data":"5ac68e7801dce2c5b4195df3893a15ee507510043056b3267dc596bb48c6aa82"} Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.773892 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.786168 4745 scope.go:117] "RemoveContainer" 
containerID="5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.810804 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.39683604 podStartE2EDuration="2.810783634s" podCreationTimestamp="2026-01-21 11:01:29 +0000 UTC" firstStartedPulling="2026-01-21 11:01:30.768307378 +0000 UTC m=+1475.229094976" lastFinishedPulling="2026-01-21 11:01:31.182254972 +0000 UTC m=+1475.643042570" observedRunningTime="2026-01-21 11:01:31.804310478 +0000 UTC m=+1476.265098076" watchObservedRunningTime="2026-01-21 11:01:31.810783634 +0000 UTC m=+1476.271571232" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.827828 4745 scope.go:117] "RemoveContainer" containerID="17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255" Jan 21 11:01:31 crc kubenswrapper[4745]: E0121 11:01:31.828364 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255\": container with ID starting with 17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255 not found: ID does not exist" containerID="17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.828410 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255"} err="failed to get container status \"17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255\": rpc error: code = NotFound desc = could not find container \"17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255\": container with ID starting with 17ed02e17b83a020ad3396ff1552b1e4d3c9815bc80139441d5a9c38b7b75255 not found: ID does not exist" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.828439 
4745 scope.go:117] "RemoveContainer" containerID="5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753" Jan 21 11:01:31 crc kubenswrapper[4745]: E0121 11:01:31.830394 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753\": container with ID starting with 5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753 not found: ID does not exist" containerID="5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.830453 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753"} err="failed to get container status \"5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753\": rpc error: code = NotFound desc = could not find container \"5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753\": container with ID starting with 5d6851e9d8d09a6d4f6d79e14291c21590d835458a50a792b0248621fbc39753 not found: ID does not exist" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.850924 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a0e1441f-0704-4a1d-a961-8ecd9c24d40f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.854848 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.952625 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-log-httpd\") pod \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.952679 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn6ll\" (UniqueName: \"kubernetes.io/projected/30fde3a0-bfde-4879-8e2f-fd5a9066b377-kube-api-access-dn6ll\") pod \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.952766 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-config-data\") pod \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.952782 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-run-httpd\") pod \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.952838 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-combined-ca-bundle\") pod \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.952934 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-sg-core-conf-yaml\") pod \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.952963 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-scripts\") pod \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\" (UID: \"30fde3a0-bfde-4879-8e2f-fd5a9066b377\") " Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.953612 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "30fde3a0-bfde-4879-8e2f-fd5a9066b377" (UID: "30fde3a0-bfde-4879-8e2f-fd5a9066b377"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.953952 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "30fde3a0-bfde-4879-8e2f-fd5a9066b377" (UID: "30fde3a0-bfde-4879-8e2f-fd5a9066b377"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.954593 4745 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.954620 4745 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/30fde3a0-bfde-4879-8e2f-fd5a9066b377-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.963398 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30fde3a0-bfde-4879-8e2f-fd5a9066b377-kube-api-access-dn6ll" (OuterVolumeSpecName: "kube-api-access-dn6ll") pod "30fde3a0-bfde-4879-8e2f-fd5a9066b377" (UID: "30fde3a0-bfde-4879-8e2f-fd5a9066b377"). InnerVolumeSpecName "kube-api-access-dn6ll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:31 crc kubenswrapper[4745]: I0121 11:01:31.981832 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-scripts" (OuterVolumeSpecName: "scripts") pod "30fde3a0-bfde-4879-8e2f-fd5a9066b377" (UID: "30fde3a0-bfde-4879-8e2f-fd5a9066b377"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.056233 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.056260 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn6ll\" (UniqueName: \"kubernetes.io/projected/30fde3a0-bfde-4879-8e2f-fd5a9066b377-kube-api-access-dn6ll\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.079301 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "30fde3a0-bfde-4879-8e2f-fd5a9066b377" (UID: "30fde3a0-bfde-4879-8e2f-fd5a9066b377"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.091277 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.118159 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "30fde3a0-bfde-4879-8e2f-fd5a9066b377" (UID: "30fde3a0-bfde-4879-8e2f-fd5a9066b377"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.150601 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.162689 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.162720 4745 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.166722 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:32 crc kubenswrapper[4745]: E0121 11:01:32.167224 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerName="nova-api-log" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.167246 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerName="nova-api-log" Jan 21 11:01:32 crc kubenswrapper[4745]: E0121 11:01:32.167263 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerName="nova-api-api" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.167271 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerName="nova-api-api" Jan 21 11:01:32 crc kubenswrapper[4745]: E0121 11:01:32.167292 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="sg-core" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.167301 4745 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="sg-core" Jan 21 11:01:32 crc kubenswrapper[4745]: E0121 11:01:32.167320 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="ceilometer-notification-agent" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.167328 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="ceilometer-notification-agent" Jan 21 11:01:32 crc kubenswrapper[4745]: E0121 11:01:32.167347 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="proxy-httpd" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.167354 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="proxy-httpd" Jan 21 11:01:32 crc kubenswrapper[4745]: E0121 11:01:32.167369 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="ceilometer-central-agent" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.167377 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="ceilometer-central-agent" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.167595 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="sg-core" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.176218 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="ceilometer-notification-agent" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.176255 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerName="nova-api-log" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.176273 4745 
memory_manager.go:354] "RemoveStaleState removing state" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="ceilometer-central-agent" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.176302 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" containerName="proxy-httpd" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.176315 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" containerName="nova-api-api" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.177567 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.180082 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.188638 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.188830 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.188939 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.249680 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-config-data" (OuterVolumeSpecName: "config-data") pod "30fde3a0-bfde-4879-8e2f-fd5a9066b377" (UID: "30fde3a0-bfde-4879-8e2f-fd5a9066b377"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.264800 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5416be97-79ce-41b4-991c-816d035a969f-logs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.264885 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-config-data\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.264923 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.265031 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqvqs\" (UniqueName: \"kubernetes.io/projected/5416be97-79ce-41b4-991c-816d035a969f-kube-api-access-hqvqs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.265293 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.265467 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-public-tls-certs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.265568 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30fde3a0-bfde-4879-8e2f-fd5a9066b377-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.367147 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-public-tls-certs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.367203 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5416be97-79ce-41b4-991c-816d035a969f-logs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.367232 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-config-data\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.367249 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 
11:01:32.367291 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqvqs\" (UniqueName: \"kubernetes.io/projected/5416be97-79ce-41b4-991c-816d035a969f-kube-api-access-hqvqs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.367350 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.367878 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5416be97-79ce-41b4-991c-816d035a969f-logs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.371802 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-public-tls-certs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.373113 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.377767 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-config-data\") pod \"nova-api-0\" (UID: 
\"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.380215 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.389515 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqvqs\" (UniqueName: \"kubernetes.io/projected/5416be97-79ce-41b4-991c-816d035a969f-kube-api-access-hqvqs\") pod \"nova-api-0\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.618105 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.793834 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.794508 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"30fde3a0-bfde-4879-8e2f-fd5a9066b377","Type":"ContainerDied","Data":"a99d44cd8eb3b378b08e8868fb143b04e75d4bc0012bfcfb633bef4eb10fc416"} Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.794668 4745 scope.go:117] "RemoveContainer" containerID="eab60d880237ec1ae529d52a975ed5eaf22dd7cad609a57c3b1f071c91b8aa2f" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.830413 4745 scope.go:117] "RemoveContainer" containerID="0d837b4b6dc0579edf34828d4c9349a63ed870fe82cedd572364ef82fa477815" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.870489 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.907351 4745 scope.go:117] "RemoveContainer" containerID="a595a994a4364ac9c55172494ff1a81834bf90a58e2b3c89050af60127a86016" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.918974 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.926965 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.934244 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.934409 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.937491 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.937766 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.942160 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 21 11:01:32 crc kubenswrapper[4745]: I0121 11:01:32.954177 4745 scope.go:117] "RemoveContainer" containerID="bc0b8277eb95ee7518305e5342fe22b592cb5fdea835c6d6c6d50f5f4b119b82" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.085917 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-config-data\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.085964 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-scripts\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.086008 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.086027 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.086069 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-log-httpd\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.086100 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.086135 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-run-httpd\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.086182 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8xhw\" (UniqueName: \"kubernetes.io/projected/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-kube-api-access-j8xhw\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.122047 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:33 crc kubenswrapper[4745]: W0121 11:01:33.125861 4745 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5416be97_79ce_41b4_991c_816d035a969f.slice/crio-38542bfd51b128b411c07dcc213278a8c527f1673231591ca301093de6492141 WatchSource:0}: Error finding container 38542bfd51b128b411c07dcc213278a8c527f1673231591ca301093de6492141: Status 404 returned error can't find the container with id 38542bfd51b128b411c07dcc213278a8c527f1673231591ca301093de6492141 Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.188341 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8xhw\" (UniqueName: \"kubernetes.io/projected/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-kube-api-access-j8xhw\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.188503 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-config-data\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.188652 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-scripts\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.188706 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.188729 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.188762 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-log-httpd\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.188796 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.188824 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-run-httpd\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.189522 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-run-httpd\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.190739 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-log-httpd\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: 
I0121 11:01:33.196488 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.201112 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-config-data\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.201657 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.205893 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.208764 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-scripts\") pod \"ceilometer-0\" (UID: \"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.212694 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8xhw\" (UniqueName: \"kubernetes.io/projected/a3f51f01-ad12-40ab-a599-bca8a2eb5cec-kube-api-access-j8xhw\") pod \"ceilometer-0\" (UID: 
\"a3f51f01-ad12-40ab-a599-bca8a2eb5cec\") " pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.262111 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.762989 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.827493 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3f51f01-ad12-40ab-a599-bca8a2eb5cec","Type":"ContainerStarted","Data":"aef1349ea9d8a219f186acc8f049d56293f7b5d254ac23130f62b92969e6e579"} Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.832791 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5416be97-79ce-41b4-991c-816d035a969f","Type":"ContainerStarted","Data":"26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732"} Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.832834 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5416be97-79ce-41b4-991c-816d035a969f","Type":"ContainerStarted","Data":"2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95"} Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.832844 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5416be97-79ce-41b4-991c-816d035a969f","Type":"ContainerStarted","Data":"38542bfd51b128b411c07dcc213278a8c527f1673231591ca301093de6492141"} Jan 21 11:01:33 crc kubenswrapper[4745]: I0121 11:01:33.875380 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.8753590820000001 podStartE2EDuration="1.875359082s" podCreationTimestamp="2026-01-21 11:01:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 11:01:33.851942724 +0000 UTC m=+1478.312730322" watchObservedRunningTime="2026-01-21 11:01:33.875359082 +0000 UTC m=+1478.336146670" Jan 21 11:01:34 crc kubenswrapper[4745]: I0121 11:01:34.016842 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30fde3a0-bfde-4879-8e2f-fd5a9066b377" path="/var/lib/kubelet/pods/30fde3a0-bfde-4879-8e2f-fd5a9066b377/volumes" Jan 21 11:01:34 crc kubenswrapper[4745]: I0121 11:01:34.017933 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0e1441f-0704-4a1d-a961-8ecd9c24d40f" path="/var/lib/kubelet/pods/a0e1441f-0704-4a1d-a961-8ecd9c24d40f/volumes" Jan 21 11:01:34 crc kubenswrapper[4745]: I0121 11:01:34.261590 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:01:34 crc kubenswrapper[4745]: I0121 11:01:34.322878 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pbrww"] Jan 21 11:01:34 crc kubenswrapper[4745]: I0121 11:01:34.323176 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" podUID="8be62d87-2c41-42a9-8327-ca29301a4361" containerName="dnsmasq-dns" containerID="cri-o://6b72a73e9bcff2596bc36085654c662c10f720d6a67844c34a8d84713cabf081" gracePeriod=10 Jan 21 11:01:34 crc kubenswrapper[4745]: I0121 11:01:34.865327 4745 generic.go:334] "Generic (PLEG): container finished" podID="8be62d87-2c41-42a9-8327-ca29301a4361" containerID="6b72a73e9bcff2596bc36085654c662c10f720d6a67844c34a8d84713cabf081" exitCode=0 Jan 21 11:01:34 crc kubenswrapper[4745]: I0121 11:01:34.867399 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" event={"ID":"8be62d87-2c41-42a9-8327-ca29301a4361","Type":"ContainerDied","Data":"6b72a73e9bcff2596bc36085654c662c10f720d6a67844c34a8d84713cabf081"} Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.210476 
4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.365314 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-config\") pod \"8be62d87-2c41-42a9-8327-ca29301a4361\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.365379 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-nb\") pod \"8be62d87-2c41-42a9-8327-ca29301a4361\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.365472 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-svc\") pod \"8be62d87-2c41-42a9-8327-ca29301a4361\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.365602 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-swift-storage-0\") pod \"8be62d87-2c41-42a9-8327-ca29301a4361\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.365622 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-sb\") pod \"8be62d87-2c41-42a9-8327-ca29301a4361\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.365710 4745 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mt5t7\" (UniqueName: \"kubernetes.io/projected/8be62d87-2c41-42a9-8327-ca29301a4361-kube-api-access-mt5t7\") pod \"8be62d87-2c41-42a9-8327-ca29301a4361\" (UID: \"8be62d87-2c41-42a9-8327-ca29301a4361\") " Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.373163 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8be62d87-2c41-42a9-8327-ca29301a4361-kube-api-access-mt5t7" (OuterVolumeSpecName: "kube-api-access-mt5t7") pod "8be62d87-2c41-42a9-8327-ca29301a4361" (UID: "8be62d87-2c41-42a9-8327-ca29301a4361"). InnerVolumeSpecName "kube-api-access-mt5t7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.421937 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-config" (OuterVolumeSpecName: "config") pod "8be62d87-2c41-42a9-8327-ca29301a4361" (UID: "8be62d87-2c41-42a9-8327-ca29301a4361"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.429121 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8be62d87-2c41-42a9-8327-ca29301a4361" (UID: "8be62d87-2c41-42a9-8327-ca29301a4361"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.431640 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8be62d87-2c41-42a9-8327-ca29301a4361" (UID: "8be62d87-2c41-42a9-8327-ca29301a4361"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.432049 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8be62d87-2c41-42a9-8327-ca29301a4361" (UID: "8be62d87-2c41-42a9-8327-ca29301a4361"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.468977 4745 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.469057 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.469092 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt5t7\" (UniqueName: \"kubernetes.io/projected/8be62d87-2c41-42a9-8327-ca29301a4361-kube-api-access-mt5t7\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.469106 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.469114 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.473060 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8be62d87-2c41-42a9-8327-ca29301a4361" (UID: "8be62d87-2c41-42a9-8327-ca29301a4361"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.571262 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8be62d87-2c41-42a9-8327-ca29301a4361-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.876989 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.876982 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-pbrww" event={"ID":"8be62d87-2c41-42a9-8327-ca29301a4361","Type":"ContainerDied","Data":"22ab24047920903a02d5b6c5f0d79593be7e77a9585c34f69cb6cd6ad635ab43"} Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.877403 4745 scope.go:117] "RemoveContainer" containerID="6b72a73e9bcff2596bc36085654c662c10f720d6a67844c34a8d84713cabf081" Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.880844 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3f51f01-ad12-40ab-a599-bca8a2eb5cec","Type":"ContainerStarted","Data":"be05ddcf0b49875fa9365bdf2953b152d576d887573347e733302fccd8fdb35b"} Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.880881 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3f51f01-ad12-40ab-a599-bca8a2eb5cec","Type":"ContainerStarted","Data":"89f631ca138e120455cf2620b82512e883eea609edb95aaf0811e346cb9d6f87"} Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.897401 4745 scope.go:117] "RemoveContainer" containerID="f8a647107f4c2d5c6236275f6144c3c7a97c133fb9ff8b8a9cd48dd93dd960ff" 
Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.919865 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pbrww"] Jan 21 11:01:35 crc kubenswrapper[4745]: I0121 11:01:35.947784 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-pbrww"] Jan 21 11:01:36 crc kubenswrapper[4745]: I0121 11:01:36.013577 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8be62d87-2c41-42a9-8327-ca29301a4361" path="/var/lib/kubelet/pods/8be62d87-2c41-42a9-8327-ca29301a4361/volumes" Jan 21 11:01:36 crc kubenswrapper[4745]: I0121 11:01:36.056327 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:36 crc kubenswrapper[4745]: I0121 11:01:36.087300 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:36 crc kubenswrapper[4745]: I0121 11:01:36.181860 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:01:36 crc kubenswrapper[4745]: I0121 11:01:36.181925 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:01:36 crc kubenswrapper[4745]: I0121 11:01:36.890803 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3f51f01-ad12-40ab-a599-bca8a2eb5cec","Type":"ContainerStarted","Data":"a844c629a789f7db4dd05089dfe89890533d32bfe015a7e948f9c50042c54b47"} Jan 21 11:01:36 crc kubenswrapper[4745]: I0121 11:01:36.915655 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.150591 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-42nll"] Jan 21 11:01:37 crc kubenswrapper[4745]: E0121 11:01:37.151114 4745 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8be62d87-2c41-42a9-8327-ca29301a4361" containerName="dnsmasq-dns" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.151134 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be62d87-2c41-42a9-8327-ca29301a4361" containerName="dnsmasq-dns" Jan 21 11:01:37 crc kubenswrapper[4745]: E0121 11:01:37.151150 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8be62d87-2c41-42a9-8327-ca29301a4361" containerName="init" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.151158 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8be62d87-2c41-42a9-8327-ca29301a4361" containerName="init" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.151348 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="8be62d87-2c41-42a9-8327-ca29301a4361" containerName="dnsmasq-dns" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.152005 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.154561 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.162275 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-42nll"] Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.170806 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.199274 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9741031c-f343-43b5-95f8-254aac8275ea" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 
11:01:37.199566 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9741031c-f343-43b5-95f8-254aac8275ea" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.310558 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.310643 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhh2q\" (UniqueName: \"kubernetes.io/projected/7e444136-6476-4a25-b073-4f5e276fe173-kube-api-access-rhh2q\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.310756 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-config-data\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.310778 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-scripts\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: 
I0121 11:01:37.412786 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.412844 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhh2q\" (UniqueName: \"kubernetes.io/projected/7e444136-6476-4a25-b073-4f5e276fe173-kube-api-access-rhh2q\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.412926 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-config-data\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.412943 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-scripts\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.418971 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.419346 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-scripts\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.431337 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-config-data\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.439636 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhh2q\" (UniqueName: \"kubernetes.io/projected/7e444136-6476-4a25-b073-4f5e276fe173-kube-api-access-rhh2q\") pod \"nova-cell1-cell-mapping-42nll\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:37 crc kubenswrapper[4745]: I0121 11:01:37.476020 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:38 crc kubenswrapper[4745]: I0121 11:01:38.071613 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-42nll"] Jan 21 11:01:38 crc kubenswrapper[4745]: I0121 11:01:38.936443 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-42nll" event={"ID":"7e444136-6476-4a25-b073-4f5e276fe173","Type":"ContainerStarted","Data":"59fa18fd04441fe640b53db2e68d59f99997ebaf8671b75549fdec50606a545b"} Jan 21 11:01:38 crc kubenswrapper[4745]: I0121 11:01:38.936867 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-42nll" event={"ID":"7e444136-6476-4a25-b073-4f5e276fe173","Type":"ContainerStarted","Data":"008d39f19b08b8292336804b1dcd74875fedbc5cd429cac6b534b51389d6243a"} Jan 21 11:01:38 crc kubenswrapper[4745]: I0121 11:01:38.940064 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a3f51f01-ad12-40ab-a599-bca8a2eb5cec","Type":"ContainerStarted","Data":"b89931db878d554100d2ca2c3dc6dab94b3cd74e9f9e135dc9031877d1cd6e54"} Jan 21 11:01:38 crc kubenswrapper[4745]: I0121 11:01:38.940295 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 21 11:01:38 crc kubenswrapper[4745]: I0121 11:01:38.958697 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-42nll" podStartSLOduration=1.958680266 podStartE2EDuration="1.958680266s" podCreationTimestamp="2026-01-21 11:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:38.951745276 +0000 UTC m=+1483.412532874" watchObservedRunningTime="2026-01-21 11:01:38.958680266 +0000 UTC m=+1483.419467864" Jan 21 11:01:38 crc kubenswrapper[4745]: I0121 11:01:38.984792 4745 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.929829775 podStartE2EDuration="6.984773597s" podCreationTimestamp="2026-01-21 11:01:32 +0000 UTC" firstStartedPulling="2026-01-21 11:01:33.766983687 +0000 UTC m=+1478.227771285" lastFinishedPulling="2026-01-21 11:01:37.821927509 +0000 UTC m=+1482.282715107" observedRunningTime="2026-01-21 11:01:38.979455932 +0000 UTC m=+1483.440243530" watchObservedRunningTime="2026-01-21 11:01:38.984773597 +0000 UTC m=+1483.445561185" Jan 21 11:01:39 crc kubenswrapper[4745]: I0121 11:01:39.711303 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 21 11:01:40 crc kubenswrapper[4745]: I0121 11:01:40.120887 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 21 11:01:42 crc kubenswrapper[4745]: I0121 11:01:42.619424 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:01:42 crc kubenswrapper[4745]: I0121 11:01:42.620073 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:01:43 crc kubenswrapper[4745]: I0121 11:01:43.632718 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5416be97-79ce-41b4-991c-816d035a969f" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.216:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:01:43 crc kubenswrapper[4745]: I0121 11:01:43.632740 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="5416be97-79ce-41b4-991c-816d035a969f" 
containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.216:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:01:43 crc kubenswrapper[4745]: I0121 11:01:43.984343 4745 generic.go:334] "Generic (PLEG): container finished" podID="7e444136-6476-4a25-b073-4f5e276fe173" containerID="59fa18fd04441fe640b53db2e68d59f99997ebaf8671b75549fdec50606a545b" exitCode=0 Jan 21 11:01:43 crc kubenswrapper[4745]: I0121 11:01:43.984411 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-42nll" event={"ID":"7e444136-6476-4a25-b073-4f5e276fe173","Type":"ContainerDied","Data":"59fa18fd04441fe640b53db2e68d59f99997ebaf8671b75549fdec50606a545b"} Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.345781 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.509163 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-config-data\") pod \"7e444136-6476-4a25-b073-4f5e276fe173\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.509372 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-scripts\") pod \"7e444136-6476-4a25-b073-4f5e276fe173\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.509650 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-combined-ca-bundle\") pod \"7e444136-6476-4a25-b073-4f5e276fe173\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " Jan 21 11:01:45 crc 
kubenswrapper[4745]: I0121 11:01:45.509681 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhh2q\" (UniqueName: \"kubernetes.io/projected/7e444136-6476-4a25-b073-4f5e276fe173-kube-api-access-rhh2q\") pod \"7e444136-6476-4a25-b073-4f5e276fe173\" (UID: \"7e444136-6476-4a25-b073-4f5e276fe173\") " Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.521940 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e444136-6476-4a25-b073-4f5e276fe173-kube-api-access-rhh2q" (OuterVolumeSpecName: "kube-api-access-rhh2q") pod "7e444136-6476-4a25-b073-4f5e276fe173" (UID: "7e444136-6476-4a25-b073-4f5e276fe173"). InnerVolumeSpecName "kube-api-access-rhh2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.522521 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-scripts" (OuterVolumeSpecName: "scripts") pod "7e444136-6476-4a25-b073-4f5e276fe173" (UID: "7e444136-6476-4a25-b073-4f5e276fe173"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.541084 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e444136-6476-4a25-b073-4f5e276fe173" (UID: "7e444136-6476-4a25-b073-4f5e276fe173"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.548069 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-config-data" (OuterVolumeSpecName: "config-data") pod "7e444136-6476-4a25-b073-4f5e276fe173" (UID: "7e444136-6476-4a25-b073-4f5e276fe173"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.611556 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.611766 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.611862 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e444136-6476-4a25-b073-4f5e276fe173-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.611915 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhh2q\" (UniqueName: \"kubernetes.io/projected/7e444136-6476-4a25-b073-4f5e276fe173-kube-api-access-rhh2q\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.867031 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.867394 4745 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.867519 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.868494 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de2d72e875ebdac4072b7484915db3fb7f2ddf3319a9637c3c9d5b967e4bccb7"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:01:45 crc kubenswrapper[4745]: I0121 11:01:45.868718 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://de2d72e875ebdac4072b7484915db3fb7f2ddf3319a9637c3c9d5b967e4bccb7" gracePeriod=600 Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.013628 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-42nll" Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.013621 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-42nll" event={"ID":"7e444136-6476-4a25-b073-4f5e276fe173","Type":"ContainerDied","Data":"008d39f19b08b8292336804b1dcd74875fedbc5cd429cac6b534b51389d6243a"} Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.013765 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="008d39f19b08b8292336804b1dcd74875fedbc5cd429cac6b534b51389d6243a" Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.018645 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="de2d72e875ebdac4072b7484915db3fb7f2ddf3319a9637c3c9d5b967e4bccb7" exitCode=0 Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.018702 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"de2d72e875ebdac4072b7484915db3fb7f2ddf3319a9637c3c9d5b967e4bccb7"} Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.018730 4745 scope.go:117] "RemoveContainer" containerID="21f1327bc2ef040b6fb6ac8d74d92c5bf542264cab55a4f20977c7ed934dca6b" Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.205426 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.205975 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="5416be97-79ce-41b4-991c-816d035a969f" containerName="nova-api-log" containerID="cri-o://2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95" gracePeriod=30 Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.206260 4745 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/nova-api-0" podUID="5416be97-79ce-41b4-991c-816d035a969f" containerName="nova-api-api" containerID="cri-o://26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732" gracePeriod=30 Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.214084 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.224269 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.224490 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4ecb1759-26cf-453e-ae21-b393c94475df" containerName="nova-scheduler-scheduler" containerID="cri-o://d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089" gracePeriod=30 Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.230851 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.236902 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:01:46 crc kubenswrapper[4745]: I0121 11:01:46.281318 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:01:47 crc kubenswrapper[4745]: I0121 11:01:47.035843 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"} Jan 21 11:01:47 crc kubenswrapper[4745]: I0121 11:01:47.061069 4745 generic.go:334] "Generic (PLEG): container finished" podID="5416be97-79ce-41b4-991c-816d035a969f" containerID="2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95" exitCode=143 Jan 21 11:01:47 crc 
kubenswrapper[4745]: I0121 11:01:47.062771 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5416be97-79ce-41b4-991c-816d035a969f","Type":"ContainerDied","Data":"2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95"} Jan 21 11:01:47 crc kubenswrapper[4745]: I0121 11:01:47.078603 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:01:48 crc kubenswrapper[4745]: I0121 11:01:48.070765 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9741031c-f343-43b5-95f8-254aac8275ea" containerName="nova-metadata-log" containerID="cri-o://4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b" gracePeriod=30 Jan 21 11:01:48 crc kubenswrapper[4745]: I0121 11:01:48.070844 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9741031c-f343-43b5-95f8-254aac8275ea" containerName="nova-metadata-metadata" containerID="cri-o://61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102" gracePeriod=30 Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.082308 4745 generic.go:334] "Generic (PLEG): container finished" podID="9741031c-f343-43b5-95f8-254aac8275ea" containerID="4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b" exitCode=143 Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.082361 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9741031c-f343-43b5-95f8-254aac8275ea","Type":"ContainerDied","Data":"4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b"} Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.575206 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.691587 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-config-data\") pod \"4ecb1759-26cf-453e-ae21-b393c94475df\" (UID: \"4ecb1759-26cf-453e-ae21-b393c94475df\") " Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.691981 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-combined-ca-bundle\") pod \"4ecb1759-26cf-453e-ae21-b393c94475df\" (UID: \"4ecb1759-26cf-453e-ae21-b393c94475df\") " Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.692415 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t664d\" (UniqueName: \"kubernetes.io/projected/4ecb1759-26cf-453e-ae21-b393c94475df-kube-api-access-t664d\") pod \"4ecb1759-26cf-453e-ae21-b393c94475df\" (UID: \"4ecb1759-26cf-453e-ae21-b393c94475df\") " Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.704384 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ecb1759-26cf-453e-ae21-b393c94475df-kube-api-access-t664d" (OuterVolumeSpecName: "kube-api-access-t664d") pod "4ecb1759-26cf-453e-ae21-b393c94475df" (UID: "4ecb1759-26cf-453e-ae21-b393c94475df"). InnerVolumeSpecName "kube-api-access-t664d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.711085 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-78cb545d88-xv4bf" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.152:8443: connect: connection refused" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.711182 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.800928 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t664d\" (UniqueName: \"kubernetes.io/projected/4ecb1759-26cf-453e-ae21-b393c94475df-kube-api-access-t664d\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.814080 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ecb1759-26cf-453e-ae21-b393c94475df" (UID: "4ecb1759-26cf-453e-ae21-b393c94475df"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.817183 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-config-data" (OuterVolumeSpecName: "config-data") pod "4ecb1759-26cf-453e-ae21-b393c94475df" (UID: "4ecb1759-26cf-453e-ae21-b393c94475df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.819328 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.902916 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-config-data\") pod \"5416be97-79ce-41b4-991c-816d035a969f\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.902953 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqvqs\" (UniqueName: \"kubernetes.io/projected/5416be97-79ce-41b4-991c-816d035a969f-kube-api-access-hqvqs\") pod \"5416be97-79ce-41b4-991c-816d035a969f\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.903029 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-internal-tls-certs\") pod \"5416be97-79ce-41b4-991c-816d035a969f\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.903068 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-public-tls-certs\") pod \"5416be97-79ce-41b4-991c-816d035a969f\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.903172 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-combined-ca-bundle\") pod \"5416be97-79ce-41b4-991c-816d035a969f\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.903247 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/5416be97-79ce-41b4-991c-816d035a969f-logs\") pod \"5416be97-79ce-41b4-991c-816d035a969f\" (UID: \"5416be97-79ce-41b4-991c-816d035a969f\") " Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.903716 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.903740 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ecb1759-26cf-453e-ae21-b393c94475df-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.903775 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5416be97-79ce-41b4-991c-816d035a969f-logs" (OuterVolumeSpecName: "logs") pod "5416be97-79ce-41b4-991c-816d035a969f" (UID: "5416be97-79ce-41b4-991c-816d035a969f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.906779 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5416be97-79ce-41b4-991c-816d035a969f-kube-api-access-hqvqs" (OuterVolumeSpecName: "kube-api-access-hqvqs") pod "5416be97-79ce-41b4-991c-816d035a969f" (UID: "5416be97-79ce-41b4-991c-816d035a969f"). InnerVolumeSpecName "kube-api-access-hqvqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.933086 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5416be97-79ce-41b4-991c-816d035a969f" (UID: "5416be97-79ce-41b4-991c-816d035a969f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.935277 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-config-data" (OuterVolumeSpecName: "config-data") pod "5416be97-79ce-41b4-991c-816d035a969f" (UID: "5416be97-79ce-41b4-991c-816d035a969f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.960720 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5416be97-79ce-41b4-991c-816d035a969f" (UID: "5416be97-79ce-41b4-991c-816d035a969f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:49 crc kubenswrapper[4745]: I0121 11:01:49.964736 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5416be97-79ce-41b4-991c-816d035a969f" (UID: "5416be97-79ce-41b4-991c-816d035a969f"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.005867 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5416be97-79ce-41b4-991c-816d035a969f-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.005908 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.005957 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqvqs\" (UniqueName: \"kubernetes.io/projected/5416be97-79ce-41b4-991c-816d035a969f-kube-api-access-hqvqs\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.005968 4745 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.005976 4745 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.005989 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5416be97-79ce-41b4-991c-816d035a969f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.151155 4745 generic.go:334] "Generic (PLEG): container finished" podID="5416be97-79ce-41b4-991c-816d035a969f" containerID="26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732" exitCode=0 Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.151280 4745 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5416be97-79ce-41b4-991c-816d035a969f","Type":"ContainerDied","Data":"26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732"} Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.151310 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"5416be97-79ce-41b4-991c-816d035a969f","Type":"ContainerDied","Data":"38542bfd51b128b411c07dcc213278a8c527f1673231591ca301093de6492141"} Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.151328 4745 scope.go:117] "RemoveContainer" containerID="26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.151563 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.175039 4745 generic.go:334] "Generic (PLEG): container finished" podID="4ecb1759-26cf-453e-ae21-b393c94475df" containerID="d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089" exitCode=0 Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.175127 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4ecb1759-26cf-453e-ae21-b393c94475df","Type":"ContainerDied","Data":"d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089"} Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.175171 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4ecb1759-26cf-453e-ae21-b393c94475df","Type":"ContainerDied","Data":"e1d026903861e5bbe54597483e2aa9aee4564b6e2ba3601913fdf9690d732963"} Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.175299 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.405929 4745 scope.go:117] "RemoveContainer" containerID="2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.431447 4745 scope.go:117] "RemoveContainer" containerID="26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732" Jan 21 11:01:50 crc kubenswrapper[4745]: E0121 11:01:50.432001 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732\": container with ID starting with 26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732 not found: ID does not exist" containerID="26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.432055 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732"} err="failed to get container status \"26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732\": rpc error: code = NotFound desc = could not find container \"26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732\": container with ID starting with 26f57a5eea9cd9c7f11dc4528b08683c6ef59b42621fcfe424602f5a2b68b732 not found: ID does not exist" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.432088 4745 scope.go:117] "RemoveContainer" containerID="2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95" Jan 21 11:01:50 crc kubenswrapper[4745]: E0121 11:01:50.432775 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95\": container with ID starting with 
2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95 not found: ID does not exist" containerID="2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.432846 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95"} err="failed to get container status \"2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95\": rpc error: code = NotFound desc = could not find container \"2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95\": container with ID starting with 2d5e3fd225b31a310b8c98bdc669553c5327699627503158de6c2aec46eabe95 not found: ID does not exist" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.432874 4745 scope.go:117] "RemoveContainer" containerID="d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.433475 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.453035 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.467935 4745 scope.go:117] "RemoveContainer" containerID="d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089" Jan 21 11:01:50 crc kubenswrapper[4745]: E0121 11:01:50.468610 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089\": container with ID starting with d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089 not found: ID does not exist" containerID="d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.468645 4745 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089"} err="failed to get container status \"d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089\": rpc error: code = NotFound desc = could not find container \"d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089\": container with ID starting with d63806cb315b784eda234931b90756f3ad84bff130fc74ad6746ffe596d20089 not found: ID does not exist" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.474952 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:50 crc kubenswrapper[4745]: E0121 11:01:50.475638 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5416be97-79ce-41b4-991c-816d035a969f" containerName="nova-api-log" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.475667 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="5416be97-79ce-41b4-991c-816d035a969f" containerName="nova-api-log" Jan 21 11:01:50 crc kubenswrapper[4745]: E0121 11:01:50.475693 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ecb1759-26cf-453e-ae21-b393c94475df" containerName="nova-scheduler-scheduler" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.475703 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ecb1759-26cf-453e-ae21-b393c94475df" containerName="nova-scheduler-scheduler" Jan 21 11:01:50 crc kubenswrapper[4745]: E0121 11:01:50.475755 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5416be97-79ce-41b4-991c-816d035a969f" containerName="nova-api-api" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.475766 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="5416be97-79ce-41b4-991c-816d035a969f" containerName="nova-api-api" Jan 21 11:01:50 crc kubenswrapper[4745]: E0121 11:01:50.475782 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7e444136-6476-4a25-b073-4f5e276fe173" containerName="nova-manage" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.475790 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e444136-6476-4a25-b073-4f5e276fe173" containerName="nova-manage" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.476074 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="5416be97-79ce-41b4-991c-816d035a969f" containerName="nova-api-api" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.476101 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ecb1759-26cf-453e-ae21-b393c94475df" containerName="nova-scheduler-scheduler" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.476124 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e444136-6476-4a25-b073-4f5e276fe173" containerName="nova-manage" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.476154 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="5416be97-79ce-41b4-991c-816d035a969f" containerName="nova-api-log" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.477653 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.481078 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.501340 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.520765 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.528137 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.545348 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.547337 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.549115 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.549156 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.549718 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.569713 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.623548 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9795b0-c473-4536-94e3-64e5dd44f230-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1c9795b0-c473-4536-94e3-64e5dd44f230\") " 
pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.623639 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7dnw\" (UniqueName: \"kubernetes.io/projected/1c9795b0-c473-4536-94e3-64e5dd44f230-kube-api-access-q7dnw\") pod \"nova-scheduler-0\" (UID: \"1c9795b0-c473-4536-94e3-64e5dd44f230\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.623696 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9795b0-c473-4536-94e3-64e5dd44f230-config-data\") pod \"nova-scheduler-0\" (UID: \"1c9795b0-c473-4536-94e3-64e5dd44f230\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.725115 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/970d824e-3226-4ab9-a661-b1185dfe5dff-logs\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.725475 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9795b0-c473-4536-94e3-64e5dd44f230-config-data\") pod \"nova-scheduler-0\" (UID: \"1c9795b0-c473-4536-94e3-64e5dd44f230\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.725515 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-public-tls-certs\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.725639 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vscrw\" (UniqueName: \"kubernetes.io/projected/970d824e-3226-4ab9-a661-b1185dfe5dff-kube-api-access-vscrw\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.725663 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.725699 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-internal-tls-certs\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.725794 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9795b0-c473-4536-94e3-64e5dd44f230-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1c9795b0-c473-4536-94e3-64e5dd44f230\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.725841 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-config-data\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.725958 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7dnw\" (UniqueName: 
\"kubernetes.io/projected/1c9795b0-c473-4536-94e3-64e5dd44f230-kube-api-access-q7dnw\") pod \"nova-scheduler-0\" (UID: \"1c9795b0-c473-4536-94e3-64e5dd44f230\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.731306 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1c9795b0-c473-4536-94e3-64e5dd44f230-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"1c9795b0-c473-4536-94e3-64e5dd44f230\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.740765 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c9795b0-c473-4536-94e3-64e5dd44f230-config-data\") pod \"nova-scheduler-0\" (UID: \"1c9795b0-c473-4536-94e3-64e5dd44f230\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.752844 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7dnw\" (UniqueName: \"kubernetes.io/projected/1c9795b0-c473-4536-94e3-64e5dd44f230-kube-api-access-q7dnw\") pod \"nova-scheduler-0\" (UID: \"1c9795b0-c473-4536-94e3-64e5dd44f230\") " pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.808864 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.828903 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-internal-tls-certs\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.828979 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-config-data\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.829101 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/970d824e-3226-4ab9-a661-b1185dfe5dff-logs\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.829144 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-public-tls-certs\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.829217 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vscrw\" (UniqueName: \"kubernetes.io/projected/970d824e-3226-4ab9-a661-b1185dfe5dff-kube-api-access-vscrw\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.829253 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.831085 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/970d824e-3226-4ab9-a661-b1185dfe5dff-logs\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.835075 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-public-tls-certs\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.836313 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-internal-tls-certs\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.839652 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-config-data\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.850336 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970d824e-3226-4ab9-a661-b1185dfe5dff-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.855314 4745 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vscrw\" (UniqueName: \"kubernetes.io/projected/970d824e-3226-4ab9-a661-b1185dfe5dff-kube-api-access-vscrw\") pod \"nova-api-0\" (UID: \"970d824e-3226-4ab9-a661-b1185dfe5dff\") " pod="openstack/nova-api-0" Jan 21 11:01:50 crc kubenswrapper[4745]: I0121 11:01:50.870385 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.203509 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9741031c-f343-43b5-95f8-254aac8275ea" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": read tcp 10.217.0.2:55386->10.217.0.214:8775: read: connection reset by peer" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.203607 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="9741031c-f343-43b5-95f8-254aac8275ea" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.214:8775/\": read tcp 10.217.0.2:55394->10.217.0.214:8775: read: connection reset by peer" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.355741 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.510351 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.739904 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.860413 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-nova-metadata-tls-certs\") pod \"9741031c-f343-43b5-95f8-254aac8275ea\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.860502 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9741031c-f343-43b5-95f8-254aac8275ea-logs\") pod \"9741031c-f343-43b5-95f8-254aac8275ea\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.860537 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-combined-ca-bundle\") pod \"9741031c-f343-43b5-95f8-254aac8275ea\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.860559 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmt69\" (UniqueName: \"kubernetes.io/projected/9741031c-f343-43b5-95f8-254aac8275ea-kube-api-access-xmt69\") pod \"9741031c-f343-43b5-95f8-254aac8275ea\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.860625 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-config-data\") pod \"9741031c-f343-43b5-95f8-254aac8275ea\" (UID: \"9741031c-f343-43b5-95f8-254aac8275ea\") " Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.861112 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9741031c-f343-43b5-95f8-254aac8275ea-logs" (OuterVolumeSpecName: "logs") pod "9741031c-f343-43b5-95f8-254aac8275ea" (UID: "9741031c-f343-43b5-95f8-254aac8275ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.861207 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9741031c-f343-43b5-95f8-254aac8275ea-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.878571 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9741031c-f343-43b5-95f8-254aac8275ea-kube-api-access-xmt69" (OuterVolumeSpecName: "kube-api-access-xmt69") pod "9741031c-f343-43b5-95f8-254aac8275ea" (UID: "9741031c-f343-43b5-95f8-254aac8275ea"). InnerVolumeSpecName "kube-api-access-xmt69". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.916072 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9741031c-f343-43b5-95f8-254aac8275ea" (UID: "9741031c-f343-43b5-95f8-254aac8275ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.918296 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-config-data" (OuterVolumeSpecName: "config-data") pod "9741031c-f343-43b5-95f8-254aac8275ea" (UID: "9741031c-f343-43b5-95f8-254aac8275ea"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.942360 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9741031c-f343-43b5-95f8-254aac8275ea" (UID: "9741031c-f343-43b5-95f8-254aac8275ea"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.965963 4745 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.966011 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.966024 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmt69\" (UniqueName: \"kubernetes.io/projected/9741031c-f343-43b5-95f8-254aac8275ea-kube-api-access-xmt69\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.966034 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9741031c-f343-43b5-95f8-254aac8275ea-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:51 crc kubenswrapper[4745]: I0121 11:01:51.966563 4745 scope.go:117] "RemoveContainer" containerID="f85d7351d2a7805e41089c6c60325c0789e868ec29d97099d6f519c2b65f6b63" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.013424 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ecb1759-26cf-453e-ae21-b393c94475df" 
path="/var/lib/kubelet/pods/4ecb1759-26cf-453e-ae21-b393c94475df/volumes" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.015160 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5416be97-79ce-41b4-991c-816d035a969f" path="/var/lib/kubelet/pods/5416be97-79ce-41b4-991c-816d035a969f/volumes" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.207635 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1c9795b0-c473-4536-94e3-64e5dd44f230","Type":"ContainerStarted","Data":"cf06e8f45135fa51fb93adaaa31dd8cd9723772193777b159e56d2c595566f33"} Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.207902 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"1c9795b0-c473-4536-94e3-64e5dd44f230","Type":"ContainerStarted","Data":"5d1d40d67d33b7dead4ff867db82287c90e0feafcdc2e51211b63cdefcd734e8"} Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.213761 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"970d824e-3226-4ab9-a661-b1185dfe5dff","Type":"ContainerStarted","Data":"9dced0812b246846210fcab46bbcb0c651497262b20c98000d476a0b5e4bd9b1"} Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.213819 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"970d824e-3226-4ab9-a661-b1185dfe5dff","Type":"ContainerStarted","Data":"6c5a47b236625e001daab2527d8eedcbddedc59789c300f545b9d4118259c225"} Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.213840 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"970d824e-3226-4ab9-a661-b1185dfe5dff","Type":"ContainerStarted","Data":"c3c58f9794aa728e39d4a26d2a4da68cc06b65e7535a823212676de7fc0cc02f"} Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.226249 4745 generic.go:334] "Generic (PLEG): container finished" podID="9741031c-f343-43b5-95f8-254aac8275ea" 
containerID="61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102" exitCode=0 Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.226358 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9741031c-f343-43b5-95f8-254aac8275ea","Type":"ContainerDied","Data":"61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102"} Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.226436 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9741031c-f343-43b5-95f8-254aac8275ea","Type":"ContainerDied","Data":"fe19053c0b4df984b1bf87a5ba8e4bbb7cc5ec1d5327c7a2e7df770ae75a0ff0"} Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.226461 4745 scope.go:117] "RemoveContainer" containerID="61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.226318 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.233161 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.233139318 podStartE2EDuration="2.233139318s" podCreationTimestamp="2026-01-21 11:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:52.231391771 +0000 UTC m=+1496.692179379" watchObservedRunningTime="2026-01-21 11:01:52.233139318 +0000 UTC m=+1496.693926936" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.331807 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.331785255 podStartE2EDuration="2.331785255s" podCreationTimestamp="2026-01-21 11:01:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-21 11:01:52.256034259 +0000 UTC m=+1496.716821877" watchObservedRunningTime="2026-01-21 11:01:52.331785255 +0000 UTC m=+1496.792572853" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.343882 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.349290 4745 scope.go:117] "RemoveContainer" containerID="4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.354938 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.366465 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:01:52 crc kubenswrapper[4745]: E0121 11:01:52.370365 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9741031c-f343-43b5-95f8-254aac8275ea" containerName="nova-metadata-metadata" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.370782 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9741031c-f343-43b5-95f8-254aac8275ea" containerName="nova-metadata-metadata" Jan 21 11:01:52 crc kubenswrapper[4745]: E0121 11:01:52.370823 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9741031c-f343-43b5-95f8-254aac8275ea" containerName="nova-metadata-log" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.370830 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9741031c-f343-43b5-95f8-254aac8275ea" containerName="nova-metadata-log" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.372642 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="9741031c-f343-43b5-95f8-254aac8275ea" containerName="nova-metadata-log" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.372690 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="9741031c-f343-43b5-95f8-254aac8275ea" 
containerName="nova-metadata-metadata" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.374475 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.377654 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.378054 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.383629 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.422832 4745 scope.go:117] "RemoveContainer" containerID="61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102" Jan 21 11:01:52 crc kubenswrapper[4745]: E0121 11:01:52.426086 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102\": container with ID starting with 61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102 not found: ID does not exist" containerID="61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.426131 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102"} err="failed to get container status \"61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102\": rpc error: code = NotFound desc = could not find container \"61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102\": container with ID starting with 61c0e6ac016170a901d83c0518b5327227df520eb8736b6a853f04f83265d102 not found: ID does not exist" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 
11:01:52.426162 4745 scope.go:117] "RemoveContainer" containerID="4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b" Jan 21 11:01:52 crc kubenswrapper[4745]: E0121 11:01:52.426565 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b\": container with ID starting with 4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b not found: ID does not exist" containerID="4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.426581 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b"} err="failed to get container status \"4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b\": rpc error: code = NotFound desc = could not find container \"4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b\": container with ID starting with 4b7333c09745cf868dbe18ea315b065bef46d5c70049a98e53001f9b0990665b not found: ID does not exist" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.482192 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-config-data\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.482262 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc 
kubenswrapper[4745]: I0121 11:01:52.482371 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86c2h\" (UniqueName: \"kubernetes.io/projected/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-kube-api-access-86c2h\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.482392 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.482448 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-logs\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.588403 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86c2h\" (UniqueName: \"kubernetes.io/projected/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-kube-api-access-86c2h\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.588446 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.588509 4745 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-logs\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.588607 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-config-data\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.588631 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.589409 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-logs\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.612242 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-config-data\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.613971 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86c2h\" (UniqueName: \"kubernetes.io/projected/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-kube-api-access-86c2h\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" 
Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.616009 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.628029 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c\") " pod="openstack/nova-metadata-0" Jan 21 11:01:52 crc kubenswrapper[4745]: I0121 11:01:52.714691 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 21 11:01:53 crc kubenswrapper[4745]: I0121 11:01:53.217503 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 21 11:01:53 crc kubenswrapper[4745]: W0121 11:01:53.220036 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod30f95a5f_17bd_4e0b_be4c_4fe9d5528d4c.slice/crio-befa6d12a7c06fd3f0295b32fd267d15b5e742edaebeeb7e3d4d8a0fbfb6385f WatchSource:0}: Error finding container befa6d12a7c06fd3f0295b32fd267d15b5e742edaebeeb7e3d4d8a0fbfb6385f: Status 404 returned error can't find the container with id befa6d12a7c06fd3f0295b32fd267d15b5e742edaebeeb7e3d4d8a0fbfb6385f Jan 21 11:01:53 crc kubenswrapper[4745]: I0121 11:01:53.240353 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c","Type":"ContainerStarted","Data":"befa6d12a7c06fd3f0295b32fd267d15b5e742edaebeeb7e3d4d8a0fbfb6385f"} Jan 21 11:01:54 crc kubenswrapper[4745]: I0121 11:01:54.013991 4745 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="9741031c-f343-43b5-95f8-254aac8275ea" path="/var/lib/kubelet/pods/9741031c-f343-43b5-95f8-254aac8275ea/volumes" Jan 21 11:01:54 crc kubenswrapper[4745]: I0121 11:01:54.250841 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c","Type":"ContainerStarted","Data":"0b9095b318d0bdfd5dd96c1659b509b5b3af691eab41b8dbc42cdb37034c6ea9"} Jan 21 11:01:54 crc kubenswrapper[4745]: I0121 11:01:54.250926 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c","Type":"ContainerStarted","Data":"848bf2fce2223273f76ec15167974d4dedd52cc36230464a6241a407c5ecc592"} Jan 21 11:01:54 crc kubenswrapper[4745]: I0121 11:01:54.277921 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.27790142 podStartE2EDuration="2.27790142s" podCreationTimestamp="2026-01-21 11:01:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:01:54.272888684 +0000 UTC m=+1498.733676292" watchObservedRunningTime="2026-01-21 11:01:54.27790142 +0000 UTC m=+1498.738689028" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.000810 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.148295 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-scripts\") pod \"8d2746d8-86a1-412c-8cac-b737fff90886\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.148691 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-tls-certs\") pod \"8d2746d8-86a1-412c-8cac-b737fff90886\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.148756 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d2746d8-86a1-412c-8cac-b737fff90886-logs\") pod \"8d2746d8-86a1-412c-8cac-b737fff90886\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.148808 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-config-data\") pod \"8d2746d8-86a1-412c-8cac-b737fff90886\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.148848 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-combined-ca-bundle\") pod \"8d2746d8-86a1-412c-8cac-b737fff90886\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.149457 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: 
\"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-secret-key\") pod \"8d2746d8-86a1-412c-8cac-b737fff90886\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.149519 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5v2s\" (UniqueName: \"kubernetes.io/projected/8d2746d8-86a1-412c-8cac-b737fff90886-kube-api-access-g5v2s\") pod \"8d2746d8-86a1-412c-8cac-b737fff90886\" (UID: \"8d2746d8-86a1-412c-8cac-b737fff90886\") " Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.149904 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d2746d8-86a1-412c-8cac-b737fff90886-logs" (OuterVolumeSpecName: "logs") pod "8d2746d8-86a1-412c-8cac-b737fff90886" (UID: "8d2746d8-86a1-412c-8cac-b737fff90886"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.151077 4745 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d2746d8-86a1-412c-8cac-b737fff90886-logs\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.155780 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d2746d8-86a1-412c-8cac-b737fff90886-kube-api-access-g5v2s" (OuterVolumeSpecName: "kube-api-access-g5v2s") pod "8d2746d8-86a1-412c-8cac-b737fff90886" (UID: "8d2746d8-86a1-412c-8cac-b737fff90886"). InnerVolumeSpecName "kube-api-access-g5v2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.156156 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8d2746d8-86a1-412c-8cac-b737fff90886" (UID: "8d2746d8-86a1-412c-8cac-b737fff90886"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.177615 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-config-data" (OuterVolumeSpecName: "config-data") pod "8d2746d8-86a1-412c-8cac-b737fff90886" (UID: "8d2746d8-86a1-412c-8cac-b737fff90886"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.182155 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-scripts" (OuterVolumeSpecName: "scripts") pod "8d2746d8-86a1-412c-8cac-b737fff90886" (UID: "8d2746d8-86a1-412c-8cac-b737fff90886"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.202295 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d2746d8-86a1-412c-8cac-b737fff90886" (UID: "8d2746d8-86a1-412c-8cac-b737fff90886"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.230491 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "8d2746d8-86a1-412c-8cac-b737fff90886" (UID: "8d2746d8-86a1-412c-8cac-b737fff90886"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.252446 4745 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-scripts\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.252487 4745 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.252499 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8d2746d8-86a1-412c-8cac-b737fff90886-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.252507 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.252522 4745 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8d2746d8-86a1-412c-8cac-b737fff90886-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.252534 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5v2s\" (UniqueName: 
\"kubernetes.io/projected/8d2746d8-86a1-412c-8cac-b737fff90886-kube-api-access-g5v2s\") on node \"crc\" DevicePath \"\"" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.263326 4745 generic.go:334] "Generic (PLEG): container finished" podID="8d2746d8-86a1-412c-8cac-b737fff90886" containerID="9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a" exitCode=137 Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.263382 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb545d88-xv4bf" event={"ID":"8d2746d8-86a1-412c-8cac-b737fff90886","Type":"ContainerDied","Data":"9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a"} Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.263404 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78cb545d88-xv4bf" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.263446 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78cb545d88-xv4bf" event={"ID":"8d2746d8-86a1-412c-8cac-b737fff90886","Type":"ContainerDied","Data":"f5057c44b306f9577b4c2b7e2fdd495725e74b778ddc9965be15eb6af5f198b5"} Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.263472 4745 scope.go:117] "RemoveContainer" containerID="167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.320146 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78cb545d88-xv4bf"] Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.331220 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78cb545d88-xv4bf"] Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.446659 4745 scope.go:117] "RemoveContainer" containerID="9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.471040 4745 scope.go:117] "RemoveContainer" 
containerID="167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3" Jan 21 11:01:55 crc kubenswrapper[4745]: E0121 11:01:55.471474 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3\": container with ID starting with 167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3 not found: ID does not exist" containerID="167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.471509 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3"} err="failed to get container status \"167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3\": rpc error: code = NotFound desc = could not find container \"167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3\": container with ID starting with 167cc32e632ce57bec4c3177e9ae47e50fd4a8b17f07e56e3ecc087ab1f1d9b3 not found: ID does not exist" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.471536 4745 scope.go:117] "RemoveContainer" containerID="9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a" Jan 21 11:01:55 crc kubenswrapper[4745]: E0121 11:01:55.471804 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a\": container with ID starting with 9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a not found: ID does not exist" containerID="9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.471845 4745 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a"} err="failed to get container status \"9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a\": rpc error: code = NotFound desc = could not find container \"9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a\": container with ID starting with 9e8f8aa1a41d964bc0b5b6b3b5d96e939df45105f7235d9567dc53be6867198a not found: ID does not exist" Jan 21 11:01:55 crc kubenswrapper[4745]: I0121 11:01:55.809025 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 21 11:01:56 crc kubenswrapper[4745]: I0121 11:01:56.010970 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" path="/var/lib/kubelet/pods/8d2746d8-86a1-412c-8cac-b737fff90886/volumes" Jan 21 11:01:57 crc kubenswrapper[4745]: I0121 11:01:57.714931 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:01:57 crc kubenswrapper[4745]: I0121 11:01:57.715601 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 21 11:02:00 crc kubenswrapper[4745]: I0121 11:02:00.809602 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 21 11:02:00 crc kubenswrapper[4745]: I0121 11:02:00.871861 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:02:00 crc kubenswrapper[4745]: I0121 11:02:00.871951 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 21 11:02:00 crc kubenswrapper[4745]: I0121 11:02:00.872445 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 21 11:02:01 crc kubenswrapper[4745]: I0121 11:02:01.364277 4745 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 21 11:02:01 crc kubenswrapper[4745]: I0121 11:02:01.895755 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="970d824e-3226-4ab9-a661-b1185dfe5dff" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:02:01 crc kubenswrapper[4745]: I0121 11:02:01.895819 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="970d824e-3226-4ab9-a661-b1185dfe5dff" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:02:02 crc kubenswrapper[4745]: I0121 11:02:02.716029 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:02:02 crc kubenswrapper[4745]: I0121 11:02:02.716105 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 21 11:02:03 crc kubenswrapper[4745]: I0121 11:02:03.277131 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 21 11:02:03 crc kubenswrapper[4745]: I0121 11:02:03.735759 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:02:03 crc kubenswrapper[4745]: I0121 11:02:03.736101 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.221:8775/\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.160830 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sjlbj"] Jan 21 11:02:10 crc kubenswrapper[4745]: E0121 11:02:10.168063 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.168109 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" Jan 21 11:02:10 crc kubenswrapper[4745]: E0121 11:02:10.168146 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.168157 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" Jan 21 11:02:10 crc kubenswrapper[4745]: E0121 11:02:10.168174 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon-log" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.168183 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon-log" Jan 21 11:02:10 crc kubenswrapper[4745]: E0121 11:02:10.168243 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.168255 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.168733 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" Jan 21 11:02:10 crc kubenswrapper[4745]: 
I0121 11:02:10.168761 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.168791 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon-log" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.169383 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d2746d8-86a1-412c-8cac-b737fff90886" containerName="horizon" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.171016 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.181832 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjlbj"] Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.371392 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-utilities\") pod \"redhat-marketplace-sjlbj\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.371808 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpjff\" (UniqueName: \"kubernetes.io/projected/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-kube-api-access-jpjff\") pod \"redhat-marketplace-sjlbj\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.371884 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-catalog-content\") pod \"redhat-marketplace-sjlbj\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.473703 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpjff\" (UniqueName: \"kubernetes.io/projected/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-kube-api-access-jpjff\") pod \"redhat-marketplace-sjlbj\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.473754 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-catalog-content\") pod \"redhat-marketplace-sjlbj\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.473860 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-utilities\") pod \"redhat-marketplace-sjlbj\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.474262 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-catalog-content\") pod \"redhat-marketplace-sjlbj\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.474334 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-utilities\") pod \"redhat-marketplace-sjlbj\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.495796 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpjff\" (UniqueName: \"kubernetes.io/projected/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-kube-api-access-jpjff\") pod \"redhat-marketplace-sjlbj\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.516004 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.879691 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.880775 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.880842 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 21 11:02:10 crc kubenswrapper[4745]: I0121 11:02:10.887669 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:02:11 crc kubenswrapper[4745]: I0121 11:02:11.012382 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjlbj"] Jan 21 11:02:11 crc kubenswrapper[4745]: I0121 11:02:11.434947 4745 generic.go:334] "Generic (PLEG): container finished" podID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" containerID="283796e1a8c3cd602cfbb76b969738be765c069314e05c2e6baa89b2fad20ef6" exitCode=0 Jan 21 11:02:11 crc kubenswrapper[4745]: I0121 11:02:11.435043 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-sjlbj" event={"ID":"b05aa449-2ad6-4e95-a7b5-ffe933e6d598","Type":"ContainerDied","Data":"283796e1a8c3cd602cfbb76b969738be765c069314e05c2e6baa89b2fad20ef6"} Jan 21 11:02:11 crc kubenswrapper[4745]: I0121 11:02:11.435444 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjlbj" event={"ID":"b05aa449-2ad6-4e95-a7b5-ffe933e6d598","Type":"ContainerStarted","Data":"a56b9b667dd351d63e1aac98deb2dbaeb72e09ca97ade1c3acdbf38b9647b5e0"} Jan 21 11:02:11 crc kubenswrapper[4745]: I0121 11:02:11.435823 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 21 11:02:11 crc kubenswrapper[4745]: I0121 11:02:11.446084 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 21 11:02:12 crc kubenswrapper[4745]: I0121 11:02:12.447402 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjlbj" event={"ID":"b05aa449-2ad6-4e95-a7b5-ffe933e6d598","Type":"ContainerStarted","Data":"2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1"} Jan 21 11:02:12 crc kubenswrapper[4745]: I0121 11:02:12.721026 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:02:12 crc kubenswrapper[4745]: I0121 11:02:12.726033 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 21 11:02:12 crc kubenswrapper[4745]: I0121 11:02:12.727908 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:02:13 crc kubenswrapper[4745]: I0121 11:02:13.460449 4745 generic.go:334] "Generic (PLEG): container finished" podID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" containerID="2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1" exitCode=0 Jan 21 11:02:13 crc kubenswrapper[4745]: I0121 11:02:13.460583 
4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjlbj" event={"ID":"b05aa449-2ad6-4e95-a7b5-ffe933e6d598","Type":"ContainerDied","Data":"2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1"} Jan 21 11:02:13 crc kubenswrapper[4745]: I0121 11:02:13.474509 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 21 11:02:14 crc kubenswrapper[4745]: I0121 11:02:14.476703 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjlbj" event={"ID":"b05aa449-2ad6-4e95-a7b5-ffe933e6d598","Type":"ContainerStarted","Data":"c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828"} Jan 21 11:02:14 crc kubenswrapper[4745]: I0121 11:02:14.536552 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sjlbj" podStartSLOduration=1.896012889 podStartE2EDuration="4.536507055s" podCreationTimestamp="2026-01-21 11:02:10 +0000 UTC" firstStartedPulling="2026-01-21 11:02:11.437325576 +0000 UTC m=+1515.898113174" lastFinishedPulling="2026-01-21 11:02:14.077819742 +0000 UTC m=+1518.538607340" observedRunningTime="2026-01-21 11:02:14.529140574 +0000 UTC m=+1518.989928192" watchObservedRunningTime="2026-01-21 11:02:14.536507055 +0000 UTC m=+1518.997294653" Jan 21 11:02:20 crc kubenswrapper[4745]: I0121 11:02:20.518464 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:20 crc kubenswrapper[4745]: I0121 11:02:20.518974 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:20 crc kubenswrapper[4745]: I0121 11:02:20.564579 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:20 crc kubenswrapper[4745]: I0121 
11:02:20.630823 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:20 crc kubenswrapper[4745]: I0121 11:02:20.804679 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjlbj"] Jan 21 11:02:21 crc kubenswrapper[4745]: I0121 11:02:21.955842 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:02:22 crc kubenswrapper[4745]: I0121 11:02:22.555209 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sjlbj" podUID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" containerName="registry-server" containerID="cri-o://c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828" gracePeriod=2 Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.176401 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.188850 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.241380 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-utilities\") pod \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.241518 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpjff\" (UniqueName: \"kubernetes.io/projected/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-kube-api-access-jpjff\") pod \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.241624 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-catalog-content\") pod \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\" (UID: \"b05aa449-2ad6-4e95-a7b5-ffe933e6d598\") " Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.242293 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-utilities" (OuterVolumeSpecName: "utilities") pod "b05aa449-2ad6-4e95-a7b5-ffe933e6d598" (UID: "b05aa449-2ad6-4e95-a7b5-ffe933e6d598"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.271472 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b05aa449-2ad6-4e95-a7b5-ffe933e6d598" (UID: "b05aa449-2ad6-4e95-a7b5-ffe933e6d598"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.274739 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-kube-api-access-jpjff" (OuterVolumeSpecName: "kube-api-access-jpjff") pod "b05aa449-2ad6-4e95-a7b5-ffe933e6d598" (UID: "b05aa449-2ad6-4e95-a7b5-ffe933e6d598"). InnerVolumeSpecName "kube-api-access-jpjff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.344143 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.344394 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpjff\" (UniqueName: \"kubernetes.io/projected/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-kube-api-access-jpjff\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.344490 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b05aa449-2ad6-4e95-a7b5-ffe933e6d598-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.566251 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sjlbj" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.566273 4745 generic.go:334] "Generic (PLEG): container finished" podID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" containerID="c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828" exitCode=0 Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.566316 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjlbj" event={"ID":"b05aa449-2ad6-4e95-a7b5-ffe933e6d598","Type":"ContainerDied","Data":"c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828"} Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.566347 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sjlbj" event={"ID":"b05aa449-2ad6-4e95-a7b5-ffe933e6d598","Type":"ContainerDied","Data":"a56b9b667dd351d63e1aac98deb2dbaeb72e09ca97ade1c3acdbf38b9647b5e0"} Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.566367 4745 scope.go:117] "RemoveContainer" containerID="c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.586768 4745 scope.go:117] "RemoveContainer" containerID="2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.606163 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjlbj"] Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.623167 4745 scope.go:117] "RemoveContainer" containerID="283796e1a8c3cd602cfbb76b969738be765c069314e05c2e6baa89b2fad20ef6" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.631924 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sjlbj"] Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.656871 4745 scope.go:117] "RemoveContainer" 
containerID="c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828" Jan 21 11:02:23 crc kubenswrapper[4745]: E0121 11:02:23.657416 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828\": container with ID starting with c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828 not found: ID does not exist" containerID="c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.657457 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828"} err="failed to get container status \"c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828\": rpc error: code = NotFound desc = could not find container \"c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828\": container with ID starting with c89feaa095610790bfd235466d342ec6f1e55520fb223f466d528c46453b2828 not found: ID does not exist" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.657480 4745 scope.go:117] "RemoveContainer" containerID="2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1" Jan 21 11:02:23 crc kubenswrapper[4745]: E0121 11:02:23.657868 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1\": container with ID starting with 2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1 not found: ID does not exist" containerID="2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.657900 4745 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1"} err="failed to get container status \"2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1\": rpc error: code = NotFound desc = could not find container \"2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1\": container with ID starting with 2a84192a008bc7681813aa44e629621d99a49f0a20e13ece058df7201793e4b1 not found: ID does not exist" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.657918 4745 scope.go:117] "RemoveContainer" containerID="283796e1a8c3cd602cfbb76b969738be765c069314e05c2e6baa89b2fad20ef6" Jan 21 11:02:23 crc kubenswrapper[4745]: E0121 11:02:23.658258 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"283796e1a8c3cd602cfbb76b969738be765c069314e05c2e6baa89b2fad20ef6\": container with ID starting with 283796e1a8c3cd602cfbb76b969738be765c069314e05c2e6baa89b2fad20ef6 not found: ID does not exist" containerID="283796e1a8c3cd602cfbb76b969738be765c069314e05c2e6baa89b2fad20ef6" Jan 21 11:02:23 crc kubenswrapper[4745]: I0121 11:02:23.658363 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"283796e1a8c3cd602cfbb76b969738be765c069314e05c2e6baa89b2fad20ef6"} err="failed to get container status \"283796e1a8c3cd602cfbb76b969738be765c069314e05c2e6baa89b2fad20ef6\": rpc error: code = NotFound desc = could not find container \"283796e1a8c3cd602cfbb76b969738be765c069314e05c2e6baa89b2fad20ef6\": container with ID starting with 283796e1a8c3cd602cfbb76b969738be765c069314e05c2e6baa89b2fad20ef6 not found: ID does not exist" Jan 21 11:02:24 crc kubenswrapper[4745]: I0121 11:02:24.024063 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" path="/var/lib/kubelet/pods/b05aa449-2ad6-4e95-a7b5-ffe933e6d598/volumes" Jan 21 11:02:27 crc kubenswrapper[4745]: I0121 
11:02:27.878569 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="4af3b414-a820-42a8-89c4-f9cade535b01" containerName="rabbitmq" containerID="cri-o://963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672" gracePeriod=604795 Jan 21 11:02:28 crc kubenswrapper[4745]: I0121 11:02:28.077586 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="557c4211-e324-49a4-8493-6685e4f5bee8" containerName="rabbitmq" containerID="cri-o://1c6dbbcee43881f6df4956ed7f9529f8a880205583ac0c54cb141310e5486f4e" gracePeriod=604796 Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.518182 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.612135 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-server-conf\") pod \"4af3b414-a820-42a8-89c4-f9cade535b01\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.612743 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-plugins-conf\") pod \"4af3b414-a820-42a8-89c4-f9cade535b01\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.612945 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbv7r\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-kube-api-access-dbv7r\") pod \"4af3b414-a820-42a8-89c4-f9cade535b01\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.613070 4745 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"4af3b414-a820-42a8-89c4-f9cade535b01\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.613207 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-plugins\") pod \"4af3b414-a820-42a8-89c4-f9cade535b01\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.613344 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-confd\") pod \"4af3b414-a820-42a8-89c4-f9cade535b01\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.613466 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-config-data\") pod \"4af3b414-a820-42a8-89c4-f9cade535b01\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.613586 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4af3b414-a820-42a8-89c4-f9cade535b01-pod-info\") pod \"4af3b414-a820-42a8-89c4-f9cade535b01\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.613702 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-tls\") pod \"4af3b414-a820-42a8-89c4-f9cade535b01\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " Jan 21 11:02:34 
crc kubenswrapper[4745]: I0121 11:02:34.613833 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-erlang-cookie\") pod \"4af3b414-a820-42a8-89c4-f9cade535b01\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.613984 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4af3b414-a820-42a8-89c4-f9cade535b01-erlang-cookie-secret\") pod \"4af3b414-a820-42a8-89c4-f9cade535b01\" (UID: \"4af3b414-a820-42a8-89c4-f9cade535b01\") " Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.614396 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4af3b414-a820-42a8-89c4-f9cade535b01" (UID: "4af3b414-a820-42a8-89c4-f9cade535b01"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.614440 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4af3b414-a820-42a8-89c4-f9cade535b01" (UID: "4af3b414-a820-42a8-89c4-f9cade535b01"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.615258 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4af3b414-a820-42a8-89c4-f9cade535b01" (UID: "4af3b414-a820-42a8-89c4-f9cade535b01"). 
InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.616036 4745 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.616076 4745 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.616091 4745 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.618843 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "4af3b414-a820-42a8-89c4-f9cade535b01" (UID: "4af3b414-a820-42a8-89c4-f9cade535b01"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.628253 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4af3b414-a820-42a8-89c4-f9cade535b01" (UID: "4af3b414-a820-42a8-89c4-f9cade535b01"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.636224 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-kube-api-access-dbv7r" (OuterVolumeSpecName: "kube-api-access-dbv7r") pod "4af3b414-a820-42a8-89c4-f9cade535b01" (UID: "4af3b414-a820-42a8-89c4-f9cade535b01"). InnerVolumeSpecName "kube-api-access-dbv7r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.652994 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4af3b414-a820-42a8-89c4-f9cade535b01-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4af3b414-a820-42a8-89c4-f9cade535b01" (UID: "4af3b414-a820-42a8-89c4-f9cade535b01"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.669702 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-config-data" (OuterVolumeSpecName: "config-data") pod "4af3b414-a820-42a8-89c4-f9cade535b01" (UID: "4af3b414-a820-42a8-89c4-f9cade535b01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.682180 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4af3b414-a820-42a8-89c4-f9cade535b01-pod-info" (OuterVolumeSpecName: "pod-info") pod "4af3b414-a820-42a8-89c4-f9cade535b01" (UID: "4af3b414-a820-42a8-89c4-f9cade535b01"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.706144 4745 generic.go:334] "Generic (PLEG): container finished" podID="4af3b414-a820-42a8-89c4-f9cade535b01" containerID="963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672" exitCode=0 Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.706203 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4af3b414-a820-42a8-89c4-f9cade535b01","Type":"ContainerDied","Data":"963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672"} Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.706229 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4af3b414-a820-42a8-89c4-f9cade535b01","Type":"ContainerDied","Data":"9d42dc478c293ac8e0e9475025367014a9e2a6046c60868f5e366c9f4c4d788d"} Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.706243 4745 scope.go:117] "RemoveContainer" containerID="963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.706364 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.711593 4745 generic.go:334] "Generic (PLEG): container finished" podID="557c4211-e324-49a4-8493-6685e4f5bee8" containerID="1c6dbbcee43881f6df4956ed7f9529f8a880205583ac0c54cb141310e5486f4e" exitCode=0 Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.711619 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c4211-e324-49a4-8493-6685e4f5bee8","Type":"ContainerDied","Data":"1c6dbbcee43881f6df4956ed7f9529f8a880205583ac0c54cb141310e5486f4e"} Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.725625 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.725649 4745 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4af3b414-a820-42a8-89c4-f9cade535b01-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.725666 4745 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.725676 4745 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4af3b414-a820-42a8-89c4-f9cade535b01-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.725709 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbv7r\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-kube-api-access-dbv7r\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:34 crc 
kubenswrapper[4745]: I0121 11:02:34.725737 4745 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.808353 4745 scope.go:117] "RemoveContainer" containerID="d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.827746 4745 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.831639 4745 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:34 crc kubenswrapper[4745]: I0121 11:02:34.988837 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4af3b414-a820-42a8-89c4-f9cade535b01" (UID: "4af3b414-a820-42a8-89c4-f9cade535b01"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.033971 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-server-conf" (OuterVolumeSpecName: "server-conf") pod "4af3b414-a820-42a8-89c4-f9cade535b01" (UID: "4af3b414-a820-42a8-89c4-f9cade535b01"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.037721 4745 scope.go:117] "RemoveContainer" containerID="963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.042126 4745 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4af3b414-a820-42a8-89c4-f9cade535b01-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.042161 4745 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4af3b414-a820-42a8-89c4-f9cade535b01-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: E0121 11:02:35.053713 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672\": container with ID starting with 963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672 not found: ID does not exist" containerID="963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.053772 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672"} err="failed to get container status \"963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672\": rpc error: code = NotFound desc = could not find container \"963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672\": container with ID starting with 963ba5cbc867b57f86f69383c8833f3e6fbffa9b9ae7948d220f890ee25c3672 not found: ID does not exist" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.053805 4745 scope.go:117] "RemoveContainer" 
containerID="d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1" Jan 21 11:02:35 crc kubenswrapper[4745]: E0121 11:02:35.055759 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1\": container with ID starting with d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1 not found: ID does not exist" containerID="d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.055804 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1"} err="failed to get container status \"d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1\": rpc error: code = NotFound desc = could not find container \"d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1\": container with ID starting with d301f10048cba676d3b848290438bceacb568b4341caa1dfdca6ba5d4ba6daa1 not found: ID does not exist" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.226286 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.347630 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-plugins-conf\") pod \"557c4211-e324-49a4-8493-6685e4f5bee8\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.347895 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-confd\") pod \"557c4211-e324-49a4-8493-6685e4f5bee8\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.348099 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6v2t\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-kube-api-access-h6v2t\") pod \"557c4211-e324-49a4-8493-6685e4f5bee8\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.348206 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-plugins\") pod \"557c4211-e324-49a4-8493-6685e4f5bee8\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.348405 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-config-data\") pod \"557c4211-e324-49a4-8493-6685e4f5bee8\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.348510 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"server-conf\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-server-conf\") pod \"557c4211-e324-49a4-8493-6685e4f5bee8\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.348719 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-tls\") pod \"557c4211-e324-49a4-8493-6685e4f5bee8\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.349100 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-erlang-cookie\") pod \"557c4211-e324-49a4-8493-6685e4f5bee8\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.349219 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"557c4211-e324-49a4-8493-6685e4f5bee8\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.349582 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/557c4211-e324-49a4-8493-6685e4f5bee8-pod-info\") pod \"557c4211-e324-49a4-8493-6685e4f5bee8\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.349789 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "557c4211-e324-49a4-8493-6685e4f5bee8" (UID: "557c4211-e324-49a4-8493-6685e4f5bee8"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.350233 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "557c4211-e324-49a4-8493-6685e4f5bee8" (UID: "557c4211-e324-49a4-8493-6685e4f5bee8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.351498 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/557c4211-e324-49a4-8493-6685e4f5bee8-erlang-cookie-secret\") pod \"557c4211-e324-49a4-8493-6685e4f5bee8\" (UID: \"557c4211-e324-49a4-8493-6685e4f5bee8\") " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.352994 4745 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.355523 4745 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.377098 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "557c4211-e324-49a4-8493-6685e4f5bee8" (UID: "557c4211-e324-49a4-8493-6685e4f5bee8"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.385382 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-kube-api-access-h6v2t" (OuterVolumeSpecName: "kube-api-access-h6v2t") pod "557c4211-e324-49a4-8493-6685e4f5bee8" (UID: "557c4211-e324-49a4-8493-6685e4f5bee8"). InnerVolumeSpecName "kube-api-access-h6v2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.387441 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "557c4211-e324-49a4-8493-6685e4f5bee8" (UID: "557c4211-e324-49a4-8493-6685e4f5bee8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.388353 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.404423 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.409214 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557c4211-e324-49a4-8493-6685e4f5bee8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "557c4211-e324-49a4-8493-6685e4f5bee8" (UID: "557c4211-e324-49a4-8493-6685e4f5bee8"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.410799 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "557c4211-e324-49a4-8493-6685e4f5bee8" (UID: "557c4211-e324-49a4-8493-6685e4f5bee8"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.410824 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/557c4211-e324-49a4-8493-6685e4f5bee8-pod-info" (OuterVolumeSpecName: "pod-info") pod "557c4211-e324-49a4-8493-6685e4f5bee8" (UID: "557c4211-e324-49a4-8493-6685e4f5bee8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.425710 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-config-data" (OuterVolumeSpecName: "config-data") pod "557c4211-e324-49a4-8493-6685e4f5bee8" (UID: "557c4211-e324-49a4-8493-6685e4f5bee8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.439578 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:02:35 crc kubenswrapper[4745]: E0121 11:02:35.450547 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557c4211-e324-49a4-8493-6685e4f5bee8" containerName="rabbitmq" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.450577 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="557c4211-e324-49a4-8493-6685e4f5bee8" containerName="rabbitmq" Jan 21 11:02:35 crc kubenswrapper[4745]: E0121 11:02:35.450606 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" containerName="registry-server" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.450614 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" containerName="registry-server" Jan 21 11:02:35 crc kubenswrapper[4745]: E0121 11:02:35.450654 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" containerName="extract-content" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.450662 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" containerName="extract-content" Jan 21 11:02:35 crc kubenswrapper[4745]: E0121 11:02:35.450674 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" containerName="extract-utilities" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.450680 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" containerName="extract-utilities" Jan 21 11:02:35 crc kubenswrapper[4745]: E0121 11:02:35.450689 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af3b414-a820-42a8-89c4-f9cade535b01" containerName="setup-container" Jan 21 
11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.450694 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af3b414-a820-42a8-89c4-f9cade535b01" containerName="setup-container" Jan 21 11:02:35 crc kubenswrapper[4745]: E0121 11:02:35.450716 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4af3b414-a820-42a8-89c4-f9cade535b01" containerName="rabbitmq" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.450723 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4af3b414-a820-42a8-89c4-f9cade535b01" containerName="rabbitmq" Jan 21 11:02:35 crc kubenswrapper[4745]: E0121 11:02:35.450732 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557c4211-e324-49a4-8493-6685e4f5bee8" containerName="setup-container" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.450737 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="557c4211-e324-49a4-8493-6685e4f5bee8" containerName="setup-container" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.451097 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="4af3b414-a820-42a8-89c4-f9cade535b01" containerName="rabbitmq" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.451119 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="557c4211-e324-49a4-8493-6685e4f5bee8" containerName="rabbitmq" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.451142 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b05aa449-2ad6-4e95-a7b5-ffe933e6d598" containerName="registry-server" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.452147 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.457782 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-nlk4f" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.457944 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.458051 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.458090 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.458152 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.458374 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.458612 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.459783 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.459803 4745 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.459812 4745 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.459832 4745 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.459841 4745 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/557c4211-e324-49a4-8493-6685e4f5bee8-pod-info\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.473912 4745 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/557c4211-e324-49a4-8493-6685e4f5bee8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.473939 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6v2t\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-kube-api-access-h6v2t\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.497590 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.502682 4745 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.557352 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-server-conf" (OuterVolumeSpecName: "server-conf") pod "557c4211-e324-49a4-8493-6685e4f5bee8" (UID: "557c4211-e324-49a4-8493-6685e4f5bee8"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.575660 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8027b59-b371-4cd4-b4a1-da4073dc0b61-config-data\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.575710 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.575732 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8027b59-b371-4cd4-b4a1-da4073dc0b61-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.575752 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8027b59-b371-4cd4-b4a1-da4073dc0b61-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.575813 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: 
I0121 11:02:35.575834 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8027b59-b371-4cd4-b4a1-da4073dc0b61-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.575871 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.575897 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzdd7\" (UniqueName: \"kubernetes.io/projected/b8027b59-b371-4cd4-b4a1-da4073dc0b61-kube-api-access-gzdd7\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.575930 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.575948 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8027b59-b371-4cd4-b4a1-da4073dc0b61-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.575973 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.576030 4745 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/557c4211-e324-49a4-8493-6685e4f5bee8-server-conf\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.576040 4745 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.613296 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "557c4211-e324-49a4-8493-6685e4f5bee8" (UID: "557c4211-e324-49a4-8493-6685e4f5bee8"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.679587 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8027b59-b371-4cd4-b4a1-da4073dc0b61-config-data\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.679710 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.679737 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8027b59-b371-4cd4-b4a1-da4073dc0b61-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.679761 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8027b59-b371-4cd4-b4a1-da4073dc0b61-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.679830 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.679860 4745 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8027b59-b371-4cd4-b4a1-da4073dc0b61-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.679903 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.679933 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzdd7\" (UniqueName: \"kubernetes.io/projected/b8027b59-b371-4cd4-b4a1-da4073dc0b61-kube-api-access-gzdd7\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.679973 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.679998 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8027b59-b371-4cd4-b4a1-da4073dc0b61-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.680014 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: 
\"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.680032 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.680130 4745 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/557c4211-e324-49a4-8493-6685e4f5bee8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.680639 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.680874 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.693128 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b8027b59-b371-4cd4-b4a1-da4073dc0b61-config-data\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.695424 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b8027b59-b371-4cd4-b4a1-da4073dc0b61-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.706837 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b8027b59-b371-4cd4-b4a1-da4073dc0b61-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.706882 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b8027b59-b371-4cd4-b4a1-da4073dc0b61-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.708656 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.709574 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b8027b59-b371-4cd4-b4a1-da4073dc0b61-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.709932 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b8027b59-b371-4cd4-b4a1-da4073dc0b61-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " 
pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.719449 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzdd7\" (UniqueName: \"kubernetes.io/projected/b8027b59-b371-4cd4-b4a1-da4073dc0b61-kube-api-access-gzdd7\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.739175 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"557c4211-e324-49a4-8493-6685e4f5bee8","Type":"ContainerDied","Data":"760432a3e6bc6dd9fa463c641ce89dc33218dc7c8537b9862a6a1ed30c0bba05"} Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.739738 4745 scope.go:117] "RemoveContainer" containerID="1c6dbbcee43881f6df4956ed7f9529f8a880205583ac0c54cb141310e5486f4e" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.739260 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.760831 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b8027b59-b371-4cd4-b4a1-da4073dc0b61\") " pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.778291 4745 scope.go:117] "RemoveContainer" containerID="c6f7996113b4bddd9c946091c6d575b94b2e4d227cbd53bacf0332274d5d275c" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.807369 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.837574 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.848851 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.850407 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.855223 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.856337 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.856933 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.857888 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.858016 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.858610 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rsjwr" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.859043 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.864565 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.918728 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.988074 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.988131 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ce38831c-0940-459f-a137-00ce0acbc5bd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.988154 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.988188 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.988216 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ce38831c-0940-459f-a137-00ce0acbc5bd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.988438 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswkh\" (UniqueName: \"kubernetes.io/projected/ce38831c-0940-459f-a137-00ce0acbc5bd-kube-api-access-fswkh\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.988481 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.988555 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ce38831c-0940-459f-a137-00ce0acbc5bd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.988602 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.988640 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce38831c-0940-459f-a137-00ce0acbc5bd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:35 crc kubenswrapper[4745]: I0121 11:02:35.988657 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ce38831c-0940-459f-a137-00ce0acbc5bd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.015975 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4af3b414-a820-42a8-89c4-f9cade535b01" path="/var/lib/kubelet/pods/4af3b414-a820-42a8-89c4-f9cade535b01/volumes" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.017723 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="557c4211-e324-49a4-8493-6685e4f5bee8" path="/var/lib/kubelet/pods/557c4211-e324-49a4-8493-6685e4f5bee8/volumes" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.091272 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce38831c-0940-459f-a137-00ce0acbc5bd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.091783 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ce38831c-0940-459f-a137-00ce0acbc5bd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.091852 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.091889 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ce38831c-0940-459f-a137-00ce0acbc5bd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.091911 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.091948 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.091976 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ce38831c-0940-459f-a137-00ce0acbc5bd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.092003 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fswkh\" (UniqueName: \"kubernetes.io/projected/ce38831c-0940-459f-a137-00ce0acbc5bd-kube-api-access-fswkh\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 
11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.092356 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ce38831c-0940-459f-a137-00ce0acbc5bd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.092563 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.092597 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ce38831c-0940-459f-a137-00ce0acbc5bd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.092644 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.093152 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.095055 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.095299 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.096966 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ce38831c-0940-459f-a137-00ce0acbc5bd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.099340 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ce38831c-0940-459f-a137-00ce0acbc5bd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.100399 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ce38831c-0940-459f-a137-00ce0acbc5bd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.118238 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.118321 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ce38831c-0940-459f-a137-00ce0acbc5bd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.125888 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ce38831c-0940-459f-a137-00ce0acbc5bd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.126715 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fswkh\" (UniqueName: \"kubernetes.io/projected/ce38831c-0940-459f-a137-00ce0acbc5bd-kube-api-access-fswkh\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.185671 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"ce38831c-0940-459f-a137-00ce0acbc5bd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.292383 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zkn2k"] Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.293952 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.314401 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.355292 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zkn2k"] Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.397937 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb8zk\" (UniqueName: \"kubernetes.io/projected/5fa8184e-a731-4987-ab2e-f55aede6cd87-kube-api-access-sb8zk\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.400303 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.400693 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.400969 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " 
pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.401241 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.401383 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-config\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.401681 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.487417 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.501091 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.503675 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb8zk\" (UniqueName: \"kubernetes.io/projected/5fa8184e-a731-4987-ab2e-f55aede6cd87-kube-api-access-sb8zk\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.503761 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.503839 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.503914 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.506108 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.506219 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-config\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.506282 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.507213 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.507581 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.508217 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-nb\") pod 
\"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.508250 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.514821 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.514513 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-config\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.552408 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb8zk\" (UniqueName: \"kubernetes.io/projected/5fa8184e-a731-4987-ab2e-f55aede6cd87-kube-api-access-sb8zk\") pod \"dnsmasq-dns-7d84b4d45c-zkn2k\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.642972 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:36 crc kubenswrapper[4745]: I0121 11:02:36.839944 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b8027b59-b371-4cd4-b4a1-da4073dc0b61","Type":"ContainerStarted","Data":"ba9f749099052ce1a1fffc267d327261650a1ff067e72ac2941be798bb294837"} Jan 21 11:02:37 crc kubenswrapper[4745]: I0121 11:02:37.424549 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zkn2k"] Jan 21 11:02:37 crc kubenswrapper[4745]: I0121 11:02:37.832298 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 21 11:02:37 crc kubenswrapper[4745]: I0121 11:02:37.869582 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ce38831c-0940-459f-a137-00ce0acbc5bd","Type":"ContainerStarted","Data":"bc3773829a8271949ef826afcaab30043025b97d6bc8fecd9dbd9ff7980a2628"} Jan 21 11:02:37 crc kubenswrapper[4745]: I0121 11:02:37.870739 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" event={"ID":"5fa8184e-a731-4987-ab2e-f55aede6cd87","Type":"ContainerStarted","Data":"1123ce083a0bdb8b5e46927a8cc561e7e0209fb104fb9dbc12c8e94a2cfd3545"} Jan 21 11:02:38 crc kubenswrapper[4745]: I0121 11:02:38.880140 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b8027b59-b371-4cd4-b4a1-da4073dc0b61","Type":"ContainerStarted","Data":"78832aaed41d639fc627c232af9ad9dfc2638766f39669d49cfe244395e3e3ca"} Jan 21 11:02:38 crc kubenswrapper[4745]: I0121 11:02:38.881727 4745 generic.go:334] "Generic (PLEG): container finished" podID="5fa8184e-a731-4987-ab2e-f55aede6cd87" containerID="63fc58a881590808f9e9eaaffb26c000feb4e6d280b2199ed238fd9fe5a51275" exitCode=0 Jan 21 11:02:38 crc kubenswrapper[4745]: I0121 11:02:38.881791 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" event={"ID":"5fa8184e-a731-4987-ab2e-f55aede6cd87","Type":"ContainerDied","Data":"63fc58a881590808f9e9eaaffb26c000feb4e6d280b2199ed238fd9fe5a51275"} Jan 21 11:02:40 crc kubenswrapper[4745]: I0121 11:02:40.901252 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" event={"ID":"5fa8184e-a731-4987-ab2e-f55aede6cd87","Type":"ContainerStarted","Data":"bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059"} Jan 21 11:02:40 crc kubenswrapper[4745]: I0121 11:02:40.901664 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:40 crc kubenswrapper[4745]: I0121 11:02:40.902707 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ce38831c-0940-459f-a137-00ce0acbc5bd","Type":"ContainerStarted","Data":"e7cc2c73e4ef47f104023c8bd4d25395cda251fe4a36301888b3a82d5d4060d4"} Jan 21 11:02:40 crc kubenswrapper[4745]: I0121 11:02:40.927588 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" podStartSLOduration=4.927559736 podStartE2EDuration="4.927559736s" podCreationTimestamp="2026-01-21 11:02:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:02:40.920051221 +0000 UTC m=+1545.380838829" watchObservedRunningTime="2026-01-21 11:02:40.927559736 +0000 UTC m=+1545.388347354" Jan 21 11:02:46 crc kubenswrapper[4745]: I0121 11:02:46.646913 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:46 crc kubenswrapper[4745]: I0121 11:02:46.727477 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-d72v9"] Jan 21 11:02:46 crc kubenswrapper[4745]: I0121 11:02:46.729083 4745 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" podUID="9e321695-cccb-4fdf-b1cb-abae2afbfb93" containerName="dnsmasq-dns" containerID="cri-o://61e6ac1f12590196a89e51472d813326aa95ffbd3e8e06e8b0cc50efa6a09431" gracePeriod=10 Jan 21 11:02:46 crc kubenswrapper[4745]: I0121 11:02:46.948905 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b556b84c5-rkzsq"] Jan 21 11:02:46 crc kubenswrapper[4745]: I0121 11:02:46.950516 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:46 crc kubenswrapper[4745]: I0121 11:02:46.987810 4745 generic.go:334] "Generic (PLEG): container finished" podID="9e321695-cccb-4fdf-b1cb-abae2afbfb93" containerID="61e6ac1f12590196a89e51472d813326aa95ffbd3e8e06e8b0cc50efa6a09431" exitCode=0 Jan 21 11:02:46 crc kubenswrapper[4745]: I0121 11:02:46.988196 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" event={"ID":"9e321695-cccb-4fdf-b1cb-abae2afbfb93","Type":"ContainerDied","Data":"61e6ac1f12590196a89e51472d813326aa95ffbd3e8e06e8b0cc50efa6a09431"} Jan 21 11:02:46 crc kubenswrapper[4745]: I0121 11:02:46.992286 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b556b84c5-rkzsq"] Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.079291 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-config\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.079351 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-ovsdbserver-nb\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.079374 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkxkb\" (UniqueName: \"kubernetes.io/projected/7eea2f90-9946-4eba-8eb8-f9e00472f0be-kube-api-access-lkxkb\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.079403 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-ovsdbserver-sb\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.079445 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-dns-swift-storage-0\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.079516 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-openstack-edpm-ipam\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.079698 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-dns-svc\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.185264 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-ovsdbserver-sb\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.185349 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-dns-swift-storage-0\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.186690 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-openstack-edpm-ipam\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.186997 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-dns-svc\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.187090 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-config\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.187171 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-ovsdbserver-nb\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.187178 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-ovsdbserver-sb\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.187205 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkxkb\" (UniqueName: \"kubernetes.io/projected/7eea2f90-9946-4eba-8eb8-f9e00472f0be-kube-api-access-lkxkb\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.187304 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-dns-swift-storage-0\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.188011 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-openstack-edpm-ipam\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.188611 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-config\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.190775 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-ovsdbserver-nb\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.198566 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7eea2f90-9946-4eba-8eb8-f9e00472f0be-dns-svc\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.228252 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkxkb\" (UniqueName: \"kubernetes.io/projected/7eea2f90-9946-4eba-8eb8-f9e00472f0be-kube-api-access-lkxkb\") pod \"dnsmasq-dns-b556b84c5-rkzsq\" (UID: \"7eea2f90-9946-4eba-8eb8-f9e00472f0be\") " pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.270698 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.293793 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.390647 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-nb\") pod \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.390720 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-sb\") pod \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.390755 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-svc\") pod \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.390826 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-swift-storage-0\") pod \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.390881 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqsjn\" (UniqueName: \"kubernetes.io/projected/9e321695-cccb-4fdf-b1cb-abae2afbfb93-kube-api-access-kqsjn\") pod \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\" (UID: 
\"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.390947 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-config\") pod \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\" (UID: \"9e321695-cccb-4fdf-b1cb-abae2afbfb93\") " Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.425275 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e321695-cccb-4fdf-b1cb-abae2afbfb93-kube-api-access-kqsjn" (OuterVolumeSpecName: "kube-api-access-kqsjn") pod "9e321695-cccb-4fdf-b1cb-abae2afbfb93" (UID: "9e321695-cccb-4fdf-b1cb-abae2afbfb93"). InnerVolumeSpecName "kube-api-access-kqsjn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.451309 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9e321695-cccb-4fdf-b1cb-abae2afbfb93" (UID: "9e321695-cccb-4fdf-b1cb-abae2afbfb93"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.465157 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9e321695-cccb-4fdf-b1cb-abae2afbfb93" (UID: "9e321695-cccb-4fdf-b1cb-abae2afbfb93"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.469187 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-config" (OuterVolumeSpecName: "config") pod "9e321695-cccb-4fdf-b1cb-abae2afbfb93" (UID: "9e321695-cccb-4fdf-b1cb-abae2afbfb93"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.488931 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9e321695-cccb-4fdf-b1cb-abae2afbfb93" (UID: "9e321695-cccb-4fdf-b1cb-abae2afbfb93"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.492901 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.492931 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.492945 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.492954 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.492967 
4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqsjn\" (UniqueName: \"kubernetes.io/projected/9e321695-cccb-4fdf-b1cb-abae2afbfb93-kube-api-access-kqsjn\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.519884 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9e321695-cccb-4fdf-b1cb-abae2afbfb93" (UID: "9e321695-cccb-4fdf-b1cb-abae2afbfb93"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.594635 4745 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9e321695-cccb-4fdf-b1cb-abae2afbfb93-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:47 crc kubenswrapper[4745]: I0121 11:02:47.782490 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b556b84c5-rkzsq"] Jan 21 11:02:47 crc kubenswrapper[4745]: W0121 11:02:47.786826 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7eea2f90_9946_4eba_8eb8_f9e00472f0be.slice/crio-efaeafa8a4a26a5bba14814b7bad1616211113a90fc15987bb354a85a28a8d8c WatchSource:0}: Error finding container efaeafa8a4a26a5bba14814b7bad1616211113a90fc15987bb354a85a28a8d8c: Status 404 returned error can't find the container with id efaeafa8a4a26a5bba14814b7bad1616211113a90fc15987bb354a85a28a8d8c Jan 21 11:02:48 crc kubenswrapper[4745]: I0121 11:02:48.003124 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" Jan 21 11:02:48 crc kubenswrapper[4745]: I0121 11:02:48.013813 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-d72v9" event={"ID":"9e321695-cccb-4fdf-b1cb-abae2afbfb93","Type":"ContainerDied","Data":"28b5f8dc3854ad1b9243ca6ff7a22b60ce1d904c5706b8fb0bb8188bfb2206b4"} Jan 21 11:02:48 crc kubenswrapper[4745]: I0121 11:02:48.014080 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" event={"ID":"7eea2f90-9946-4eba-8eb8-f9e00472f0be","Type":"ContainerStarted","Data":"efaeafa8a4a26a5bba14814b7bad1616211113a90fc15987bb354a85a28a8d8c"} Jan 21 11:02:48 crc kubenswrapper[4745]: I0121 11:02:48.014172 4745 scope.go:117] "RemoveContainer" containerID="61e6ac1f12590196a89e51472d813326aa95ffbd3e8e06e8b0cc50efa6a09431" Jan 21 11:02:48 crc kubenswrapper[4745]: I0121 11:02:48.043955 4745 scope.go:117] "RemoveContainer" containerID="f16449f2a2b8ba59b532abc0ec940b20ca498aac49deaa755572615a5c2d7b8d" Jan 21 11:02:48 crc kubenswrapper[4745]: I0121 11:02:48.057699 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-d72v9"] Jan 21 11:02:48 crc kubenswrapper[4745]: I0121 11:02:48.066948 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-d72v9"] Jan 21 11:02:49 crc kubenswrapper[4745]: I0121 11:02:49.023946 4745 generic.go:334] "Generic (PLEG): container finished" podID="7eea2f90-9946-4eba-8eb8-f9e00472f0be" containerID="ff3cc8bd1469509cc73eb4ceb1921cd643f6f1eb39403d33e70ebf0b247cd8c3" exitCode=0 Jan 21 11:02:49 crc kubenswrapper[4745]: I0121 11:02:49.024729 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" event={"ID":"7eea2f90-9946-4eba-8eb8-f9e00472f0be","Type":"ContainerDied","Data":"ff3cc8bd1469509cc73eb4ceb1921cd643f6f1eb39403d33e70ebf0b247cd8c3"} Jan 21 11:02:50 crc kubenswrapper[4745]: I0121 
11:02:50.018352 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e321695-cccb-4fdf-b1cb-abae2afbfb93" path="/var/lib/kubelet/pods/9e321695-cccb-4fdf-b1cb-abae2afbfb93/volumes" Jan 21 11:02:50 crc kubenswrapper[4745]: I0121 11:02:50.037611 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" event={"ID":"7eea2f90-9946-4eba-8eb8-f9e00472f0be","Type":"ContainerStarted","Data":"e64ac8b3c7a9795ef857b19b65c04f22e912fb0561695b6b8fff7827cfff4cfe"} Jan 21 11:02:50 crc kubenswrapper[4745]: I0121 11:02:50.037910 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:50 crc kubenswrapper[4745]: I0121 11:02:50.063661 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" podStartSLOduration=4.063643988 podStartE2EDuration="4.063643988s" podCreationTimestamp="2026-01-21 11:02:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:02:50.061304224 +0000 UTC m=+1554.522091832" watchObservedRunningTime="2026-01-21 11:02:50.063643988 +0000 UTC m=+1554.524431586" Jan 21 11:02:52 crc kubenswrapper[4745]: I0121 11:02:52.425375 4745 scope.go:117] "RemoveContainer" containerID="2406ae50264187dce315a4b62fadd851442d2a86b880bd3994da31a4c582aaf0" Jan 21 11:02:52 crc kubenswrapper[4745]: I0121 11:02:52.474410 4745 scope.go:117] "RemoveContainer" containerID="6bca8a3f9db747f7e36c5c7ed91e6af7408d39817c37286f426a1898f2be45c1" Jan 21 11:02:57 crc kubenswrapper[4745]: I0121 11:02:57.279664 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b556b84c5-rkzsq" Jan 21 11:02:57 crc kubenswrapper[4745]: I0121 11:02:57.371843 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zkn2k"] Jan 21 11:02:57 crc 
kubenswrapper[4745]: I0121 11:02:57.372215 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" podUID="5fa8184e-a731-4987-ab2e-f55aede6cd87" containerName="dnsmasq-dns" containerID="cri-o://bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059" gracePeriod=10 Jan 21 11:02:57 crc kubenswrapper[4745]: I0121 11:02:57.884861 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.016017 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-openstack-edpm-ipam\") pod \"5fa8184e-a731-4987-ab2e-f55aede6cd87\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.016150 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-svc\") pod \"5fa8184e-a731-4987-ab2e-f55aede6cd87\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.016178 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-nb\") pod \"5fa8184e-a731-4987-ab2e-f55aede6cd87\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.016236 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb8zk\" (UniqueName: \"kubernetes.io/projected/5fa8184e-a731-4987-ab2e-f55aede6cd87-kube-api-access-sb8zk\") pod \"5fa8184e-a731-4987-ab2e-f55aede6cd87\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " Jan 21 11:02:58 crc kubenswrapper[4745]: 
I0121 11:02:58.016258 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-config\") pod \"5fa8184e-a731-4987-ab2e-f55aede6cd87\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.016298 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-swift-storage-0\") pod \"5fa8184e-a731-4987-ab2e-f55aede6cd87\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.016345 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-sb\") pod \"5fa8184e-a731-4987-ab2e-f55aede6cd87\" (UID: \"5fa8184e-a731-4987-ab2e-f55aede6cd87\") " Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.021847 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fa8184e-a731-4987-ab2e-f55aede6cd87-kube-api-access-sb8zk" (OuterVolumeSpecName: "kube-api-access-sb8zk") pod "5fa8184e-a731-4987-ab2e-f55aede6cd87" (UID: "5fa8184e-a731-4987-ab2e-f55aede6cd87"). InnerVolumeSpecName "kube-api-access-sb8zk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.075878 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "5fa8184e-a731-4987-ab2e-f55aede6cd87" (UID: "5fa8184e-a731-4987-ab2e-f55aede6cd87"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.081944 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "5fa8184e-a731-4987-ab2e-f55aede6cd87" (UID: "5fa8184e-a731-4987-ab2e-f55aede6cd87"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.082791 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5fa8184e-a731-4987-ab2e-f55aede6cd87" (UID: "5fa8184e-a731-4987-ab2e-f55aede6cd87"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.088178 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5fa8184e-a731-4987-ab2e-f55aede6cd87" (UID: "5fa8184e-a731-4987-ab2e-f55aede6cd87"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.096342 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-config" (OuterVolumeSpecName: "config") pod "5fa8184e-a731-4987-ab2e-f55aede6cd87" (UID: "5fa8184e-a731-4987-ab2e-f55aede6cd87"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.116264 4745 generic.go:334] "Generic (PLEG): container finished" podID="5fa8184e-a731-4987-ab2e-f55aede6cd87" containerID="bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059" exitCode=0 Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.116403 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.116416 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" event={"ID":"5fa8184e-a731-4987-ab2e-f55aede6cd87","Type":"ContainerDied","Data":"bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059"} Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.117524 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-zkn2k" event={"ID":"5fa8184e-a731-4987-ab2e-f55aede6cd87","Type":"ContainerDied","Data":"1123ce083a0bdb8b5e46927a8cc561e7e0209fb104fb9dbc12c8e94a2cfd3545"} Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.117554 4745 scope.go:117] "RemoveContainer" containerID="bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.119371 4745 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.119388 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.119397 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb8zk\" 
(UniqueName: \"kubernetes.io/projected/5fa8184e-a731-4987-ab2e-f55aede6cd87-kube-api-access-sb8zk\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.119407 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.119415 4745 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.119423 4745 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.135881 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5fa8184e-a731-4987-ab2e-f55aede6cd87" (UID: "5fa8184e-a731-4987-ab2e-f55aede6cd87"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.143621 4745 scope.go:117] "RemoveContainer" containerID="63fc58a881590808f9e9eaaffb26c000feb4e6d280b2199ed238fd9fe5a51275" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.161618 4745 scope.go:117] "RemoveContainer" containerID="bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059" Jan 21 11:02:58 crc kubenswrapper[4745]: E0121 11:02:58.162070 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059\": container with ID starting with bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059 not found: ID does not exist" containerID="bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.162110 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059"} err="failed to get container status \"bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059\": rpc error: code = NotFound desc = could not find container \"bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059\": container with ID starting with bf945943d7a393046e152e91bf754d4c3f26a06b6659f2efcf03657b7b9a6059 not found: ID does not exist" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.162138 4745 scope.go:117] "RemoveContainer" containerID="63fc58a881590808f9e9eaaffb26c000feb4e6d280b2199ed238fd9fe5a51275" Jan 21 11:02:58 crc kubenswrapper[4745]: E0121 11:02:58.162472 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63fc58a881590808f9e9eaaffb26c000feb4e6d280b2199ed238fd9fe5a51275\": container with ID starting with 
63fc58a881590808f9e9eaaffb26c000feb4e6d280b2199ed238fd9fe5a51275 not found: ID does not exist" containerID="63fc58a881590808f9e9eaaffb26c000feb4e6d280b2199ed238fd9fe5a51275" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.162551 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63fc58a881590808f9e9eaaffb26c000feb4e6d280b2199ed238fd9fe5a51275"} err="failed to get container status \"63fc58a881590808f9e9eaaffb26c000feb4e6d280b2199ed238fd9fe5a51275\": rpc error: code = NotFound desc = could not find container \"63fc58a881590808f9e9eaaffb26c000feb4e6d280b2199ed238fd9fe5a51275\": container with ID starting with 63fc58a881590808f9e9eaaffb26c000feb4e6d280b2199ed238fd9fe5a51275 not found: ID does not exist" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.220772 4745 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5fa8184e-a731-4987-ab2e-f55aede6cd87-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.463018 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zkn2k"] Jan 21 11:02:58 crc kubenswrapper[4745]: I0121 11:02:58.473730 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-zkn2k"] Jan 21 11:03:00 crc kubenswrapper[4745]: I0121 11:03:00.013808 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fa8184e-a731-4987-ab2e-f55aede6cd87" path="/var/lib/kubelet/pods/5fa8184e-a731-4987-ab2e-f55aede6cd87/volumes" Jan 21 11:03:11 crc kubenswrapper[4745]: I0121 11:03:11.258937 4745 generic.go:334] "Generic (PLEG): container finished" podID="b8027b59-b371-4cd4-b4a1-da4073dc0b61" containerID="78832aaed41d639fc627c232af9ad9dfc2638766f39669d49cfe244395e3e3ca" exitCode=0 Jan 21 11:03:11 crc kubenswrapper[4745]: I0121 11:03:11.259021 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"b8027b59-b371-4cd4-b4a1-da4073dc0b61","Type":"ContainerDied","Data":"78832aaed41d639fc627c232af9ad9dfc2638766f39669d49cfe244395e3e3ca"} Jan 21 11:03:12 crc kubenswrapper[4745]: I0121 11:03:12.275248 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b8027b59-b371-4cd4-b4a1-da4073dc0b61","Type":"ContainerStarted","Data":"3055f6106e55a6fd8171de8f73b24288a717c24557b5b929254aa35eb5a95ebb"} Jan 21 11:03:12 crc kubenswrapper[4745]: I0121 11:03:12.276105 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 21 11:03:13 crc kubenswrapper[4745]: I0121 11:03:13.283186 4745 generic.go:334] "Generic (PLEG): container finished" podID="ce38831c-0940-459f-a137-00ce0acbc5bd" containerID="e7cc2c73e4ef47f104023c8bd4d25395cda251fe4a36301888b3a82d5d4060d4" exitCode=0 Jan 21 11:03:13 crc kubenswrapper[4745]: I0121 11:03:13.283286 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"ce38831c-0940-459f-a137-00ce0acbc5bd","Type":"ContainerDied","Data":"e7cc2c73e4ef47f104023c8bd4d25395cda251fe4a36301888b3a82d5d4060d4"} Jan 21 11:03:13 crc kubenswrapper[4745]: I0121 11:03:13.350672 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.350649713 podStartE2EDuration="38.350649713s" podCreationTimestamp="2026-01-21 11:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:03:12.32318945 +0000 UTC m=+1576.783977058" watchObservedRunningTime="2026-01-21 11:03:13.350649713 +0000 UTC m=+1577.811437301" Jan 21 11:03:14 crc kubenswrapper[4745]: I0121 11:03:14.297115 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"ce38831c-0940-459f-a137-00ce0acbc5bd","Type":"ContainerStarted","Data":"3cdcdbcd6e4cc758c5fd126cc11de88fac42e18d1ff939adf21d092218b81d02"} Jan 21 11:03:14 crc kubenswrapper[4745]: I0121 11:03:14.298163 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:03:14 crc kubenswrapper[4745]: I0121 11:03:14.330084 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.330061934 podStartE2EDuration="39.330061934s" podCreationTimestamp="2026-01-21 11:02:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:03:14.322508428 +0000 UTC m=+1578.783296046" watchObservedRunningTime="2026-01-21 11:03:14.330061934 +0000 UTC m=+1578.790849542" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.024709 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv"] Jan 21 11:03:16 crc kubenswrapper[4745]: E0121 11:03:16.025550 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa8184e-a731-4987-ab2e-f55aede6cd87" containerName="dnsmasq-dns" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.025567 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa8184e-a731-4987-ab2e-f55aede6cd87" containerName="dnsmasq-dns" Jan 21 11:03:16 crc kubenswrapper[4745]: E0121 11:03:16.025596 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa8184e-a731-4987-ab2e-f55aede6cd87" containerName="init" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.025604 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa8184e-a731-4987-ab2e-f55aede6cd87" containerName="init" Jan 21 11:03:16 crc kubenswrapper[4745]: E0121 11:03:16.025638 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="9e321695-cccb-4fdf-b1cb-abae2afbfb93" containerName="init" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.025647 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e321695-cccb-4fdf-b1cb-abae2afbfb93" containerName="init" Jan 21 11:03:16 crc kubenswrapper[4745]: E0121 11:03:16.025663 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e321695-cccb-4fdf-b1cb-abae2afbfb93" containerName="dnsmasq-dns" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.025670 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e321695-cccb-4fdf-b1cb-abae2afbfb93" containerName="dnsmasq-dns" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.025910 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e321695-cccb-4fdf-b1cb-abae2afbfb93" containerName="dnsmasq-dns" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.025935 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa8184e-a731-4987-ab2e-f55aede6cd87" containerName="dnsmasq-dns" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.026783 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.029720 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.031306 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.031412 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.033455 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.048059 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv"] Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.101010 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.101264 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc 
kubenswrapper[4745]: I0121 11:03:16.101663 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tcrh\" (UniqueName: \"kubernetes.io/projected/0ac74398-cfce-4a36-998c-057d617fe478-kube-api-access-6tcrh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.101755 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.203692 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.203748 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.203856 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tcrh\" (UniqueName: 
\"kubernetes.io/projected/0ac74398-cfce-4a36-998c-057d617fe478-kube-api-access-6tcrh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.203886 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.212461 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.217134 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.225066 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.225819 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tcrh\" (UniqueName: \"kubernetes.io/projected/0ac74398-cfce-4a36-998c-057d617fe478-kube-api-access-6tcrh\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:16 crc kubenswrapper[4745]: I0121 11:03:16.432398 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:03:17 crc kubenswrapper[4745]: I0121 11:03:17.246117 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv"] Jan 21 11:03:17 crc kubenswrapper[4745]: W0121 11:03:17.259624 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ac74398_cfce_4a36_998c_057d617fe478.slice/crio-d0871e073ef9c0f3e51e7686bb484d3b6880d34e9ed9d472eff842439e101059 WatchSource:0}: Error finding container d0871e073ef9c0f3e51e7686bb484d3b6880d34e9ed9d472eff842439e101059: Status 404 returned error can't find the container with id d0871e073ef9c0f3e51e7686bb484d3b6880d34e9ed9d472eff842439e101059 Jan 21 11:03:17 crc kubenswrapper[4745]: I0121 11:03:17.320448 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" event={"ID":"0ac74398-cfce-4a36-998c-057d617fe478","Type":"ContainerStarted","Data":"d0871e073ef9c0f3e51e7686bb484d3b6880d34e9ed9d472eff842439e101059"} Jan 21 11:03:18 crc kubenswrapper[4745]: I0121 11:03:18.620753 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5x4xz"] Jan 21 11:03:18 crc kubenswrapper[4745]: 
I0121 11:03:18.624017 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:18 crc kubenswrapper[4745]: I0121 11:03:18.659177 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5x4xz"] Jan 21 11:03:18 crc kubenswrapper[4745]: I0121 11:03:18.757897 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-utilities\") pod \"certified-operators-5x4xz\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:18 crc kubenswrapper[4745]: I0121 11:03:18.758008 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbg2l\" (UniqueName: \"kubernetes.io/projected/c8083bf6-fbee-4316-b9ed-553d2e5745c0-kube-api-access-zbg2l\") pod \"certified-operators-5x4xz\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:18 crc kubenswrapper[4745]: I0121 11:03:18.758109 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-catalog-content\") pod \"certified-operators-5x4xz\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:18 crc kubenswrapper[4745]: I0121 11:03:18.859404 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-utilities\") pod \"certified-operators-5x4xz\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:18 crc kubenswrapper[4745]: 
I0121 11:03:18.859503 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbg2l\" (UniqueName: \"kubernetes.io/projected/c8083bf6-fbee-4316-b9ed-553d2e5745c0-kube-api-access-zbg2l\") pod \"certified-operators-5x4xz\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:18 crc kubenswrapper[4745]: I0121 11:03:18.859585 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-catalog-content\") pod \"certified-operators-5x4xz\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:18 crc kubenswrapper[4745]: I0121 11:03:18.859946 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-utilities\") pod \"certified-operators-5x4xz\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:18 crc kubenswrapper[4745]: I0121 11:03:18.859996 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-catalog-content\") pod \"certified-operators-5x4xz\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:18 crc kubenswrapper[4745]: I0121 11:03:18.879120 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbg2l\" (UniqueName: \"kubernetes.io/projected/c8083bf6-fbee-4316-b9ed-553d2e5745c0-kube-api-access-zbg2l\") pod \"certified-operators-5x4xz\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:18 crc kubenswrapper[4745]: I0121 11:03:18.946333 4745 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:19 crc kubenswrapper[4745]: I0121 11:03:19.500699 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5x4xz"] Jan 21 11:03:20 crc kubenswrapper[4745]: I0121 11:03:20.354866 4745 generic.go:334] "Generic (PLEG): container finished" podID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" containerID="b5048f154c48a65627234e99c3400b260f35a4d52bec0b48375cd2a6fcce38ff" exitCode=0 Jan 21 11:03:20 crc kubenswrapper[4745]: I0121 11:03:20.355066 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x4xz" event={"ID":"c8083bf6-fbee-4316-b9ed-553d2e5745c0","Type":"ContainerDied","Data":"b5048f154c48a65627234e99c3400b260f35a4d52bec0b48375cd2a6fcce38ff"} Jan 21 11:03:20 crc kubenswrapper[4745]: I0121 11:03:20.355309 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x4xz" event={"ID":"c8083bf6-fbee-4316-b9ed-553d2e5745c0","Type":"ContainerStarted","Data":"98ae843788163d01672fcc7a9e836031861e1ee21b84395de73b7f8d191f711c"} Jan 21 11:03:22 crc kubenswrapper[4745]: I0121 11:03:22.374891 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x4xz" event={"ID":"c8083bf6-fbee-4316-b9ed-553d2e5745c0","Type":"ContainerStarted","Data":"5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132"} Jan 21 11:03:23 crc kubenswrapper[4745]: I0121 11:03:23.391391 4745 generic.go:334] "Generic (PLEG): container finished" podID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" containerID="5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132" exitCode=0 Jan 21 11:03:23 crc kubenswrapper[4745]: I0121 11:03:23.391676 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x4xz" 
event={"ID":"c8083bf6-fbee-4316-b9ed-553d2e5745c0","Type":"ContainerDied","Data":"5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132"} Jan 21 11:03:24 crc kubenswrapper[4745]: I0121 11:03:24.403310 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x4xz" event={"ID":"c8083bf6-fbee-4316-b9ed-553d2e5745c0","Type":"ContainerStarted","Data":"c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f"} Jan 21 11:03:24 crc kubenswrapper[4745]: I0121 11:03:24.432538 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5x4xz" podStartSLOduration=3.008775681 podStartE2EDuration="6.432501401s" podCreationTimestamp="2026-01-21 11:03:18 +0000 UTC" firstStartedPulling="2026-01-21 11:03:20.358853077 +0000 UTC m=+1584.819640675" lastFinishedPulling="2026-01-21 11:03:23.782578797 +0000 UTC m=+1588.243366395" observedRunningTime="2026-01-21 11:03:24.42505691 +0000 UTC m=+1588.885844508" watchObservedRunningTime="2026-01-21 11:03:24.432501401 +0000 UTC m=+1588.893288989" Jan 21 11:03:25 crc kubenswrapper[4745]: I0121 11:03:25.924999 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 21 11:03:26 crc kubenswrapper[4745]: I0121 11:03:26.491733 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 21 11:03:28 crc kubenswrapper[4745]: I0121 11:03:28.947145 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:28 crc kubenswrapper[4745]: I0121 11:03:28.947609 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:30 crc kubenswrapper[4745]: I0121 11:03:30.004135 4745 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-5x4xz" podUID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" containerName="registry-server" probeResult="failure" output=< Jan 21 11:03:30 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:03:30 crc kubenswrapper[4745]: > Jan 21 11:03:34 crc kubenswrapper[4745]: E0121 11:03:34.078603 4745 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 21 11:03:34 crc kubenswrapper[4745]: E0121 11:03:34.079378 4745 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 21 11:03:34 crc kubenswrapper[4745]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 21 11:03:34 crc kubenswrapper[4745]: - hosts: all Jan 21 11:03:34 crc kubenswrapper[4745]: strategy: linear Jan 21 11:03:34 crc kubenswrapper[4745]: tasks: Jan 21 11:03:34 crc kubenswrapper[4745]: - name: Enable podified-repos Jan 21 11:03:34 crc kubenswrapper[4745]: become: true Jan 21 11:03:34 crc kubenswrapper[4745]: ansible.builtin.shell: | Jan 21 11:03:34 crc kubenswrapper[4745]: set -euxo pipefail Jan 21 11:03:34 crc kubenswrapper[4745]: pushd /var/tmp Jan 21 11:03:34 crc kubenswrapper[4745]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Jan 21 11:03:34 crc kubenswrapper[4745]: pushd repo-setup-main Jan 21 11:03:34 crc kubenswrapper[4745]: python3 -m venv ./venv Jan 21 11:03:34 crc kubenswrapper[4745]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Jan 21 11:03:34 crc kubenswrapper[4745]: 
./venv/bin/repo-setup current-podified -b antelope Jan 21 11:03:34 crc kubenswrapper[4745]: popd Jan 21 11:03:34 crc kubenswrapper[4745]: rm -rf repo-setup-main Jan 21 11:03:34 crc kubenswrapper[4745]: Jan 21 11:03:34 crc kubenswrapper[4745]: Jan 21 11:03:34 crc kubenswrapper[4745]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 21 11:03:34 crc kubenswrapper[4745]: edpm_override_hosts: openstack-edpm-ipam Jan 21 11:03:34 crc kubenswrapper[4745]: edpm_service_type: repo-setup Jan 21 11:03:34 crc kubenswrapper[4745]: Jan 21 11:03:34 crc kubenswrapper[4745]: Jan 21 11:03:34 crc kubenswrapper[4745]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6tcrh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef
:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv_openstack(0ac74398-cfce-4a36-998c-057d617fe478): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 21 11:03:34 crc kubenswrapper[4745]: > logger="UnhandledError" Jan 21 11:03:34 crc kubenswrapper[4745]: E0121 11:03:34.080652 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" podUID="0ac74398-cfce-4a36-998c-057d617fe478" Jan 21 11:03:34 crc kubenswrapper[4745]: E0121 11:03:34.529474 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" podUID="0ac74398-cfce-4a36-998c-057d617fe478" Jan 21 11:03:39 crc kubenswrapper[4745]: I0121 11:03:39.024225 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:39 crc kubenswrapper[4745]: I0121 11:03:39.101706 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:39 crc kubenswrapper[4745]: I0121 11:03:39.273372 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5x4xz"] Jan 21 11:03:40 crc kubenswrapper[4745]: 
I0121 11:03:40.599765 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5x4xz" podUID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" containerName="registry-server" containerID="cri-o://c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f" gracePeriod=2 Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.230797 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.330369 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbg2l\" (UniqueName: \"kubernetes.io/projected/c8083bf6-fbee-4316-b9ed-553d2e5745c0-kube-api-access-zbg2l\") pod \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.330531 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-catalog-content\") pod \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.330692 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-utilities\") pod \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\" (UID: \"c8083bf6-fbee-4316-b9ed-553d2e5745c0\") " Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.331349 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-utilities" (OuterVolumeSpecName: "utilities") pod "c8083bf6-fbee-4316-b9ed-553d2e5745c0" (UID: "c8083bf6-fbee-4316-b9ed-553d2e5745c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.340126 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8083bf6-fbee-4316-b9ed-553d2e5745c0-kube-api-access-zbg2l" (OuterVolumeSpecName: "kube-api-access-zbg2l") pod "c8083bf6-fbee-4316-b9ed-553d2e5745c0" (UID: "c8083bf6-fbee-4316-b9ed-553d2e5745c0"). InnerVolumeSpecName "kube-api-access-zbg2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.376575 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8083bf6-fbee-4316-b9ed-553d2e5745c0" (UID: "c8083bf6-fbee-4316-b9ed-553d2e5745c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.433210 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbg2l\" (UniqueName: \"kubernetes.io/projected/c8083bf6-fbee-4316-b9ed-553d2e5745c0-kube-api-access-zbg2l\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.433254 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.433310 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8083bf6-fbee-4316-b9ed-553d2e5745c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.610303 4745 generic.go:334] "Generic (PLEG): container finished" podID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" 
containerID="c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f" exitCode=0 Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.610342 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x4xz" event={"ID":"c8083bf6-fbee-4316-b9ed-553d2e5745c0","Type":"ContainerDied","Data":"c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f"} Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.610369 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x4xz" event={"ID":"c8083bf6-fbee-4316-b9ed-553d2e5745c0","Type":"ContainerDied","Data":"98ae843788163d01672fcc7a9e836031861e1ee21b84395de73b7f8d191f711c"} Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.610372 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x4xz" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.610387 4745 scope.go:117] "RemoveContainer" containerID="c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.633377 4745 scope.go:117] "RemoveContainer" containerID="5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.654750 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5x4xz"] Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.663604 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5x4xz"] Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.672247 4745 scope.go:117] "RemoveContainer" containerID="b5048f154c48a65627234e99c3400b260f35a4d52bec0b48375cd2a6fcce38ff" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.709275 4745 scope.go:117] "RemoveContainer" containerID="c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f" Jan 21 
11:03:41 crc kubenswrapper[4745]: E0121 11:03:41.709667 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f\": container with ID starting with c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f not found: ID does not exist" containerID="c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.709695 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f"} err="failed to get container status \"c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f\": rpc error: code = NotFound desc = could not find container \"c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f\": container with ID starting with c9d725ebea8bfe5f80d961958416854a306211a3d57ae0752b307213fe83856f not found: ID does not exist" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.709720 4745 scope.go:117] "RemoveContainer" containerID="5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132" Jan 21 11:03:41 crc kubenswrapper[4745]: E0121 11:03:41.709939 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132\": container with ID starting with 5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132 not found: ID does not exist" containerID="5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.709963 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132"} err="failed to get container status 
\"5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132\": rpc error: code = NotFound desc = could not find container \"5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132\": container with ID starting with 5e4a81928409461d734c4edd5e6c23e69cf64a1da43bade451599fb669b47132 not found: ID does not exist" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.709975 4745 scope.go:117] "RemoveContainer" containerID="b5048f154c48a65627234e99c3400b260f35a4d52bec0b48375cd2a6fcce38ff" Jan 21 11:03:41 crc kubenswrapper[4745]: E0121 11:03:41.710155 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5048f154c48a65627234e99c3400b260f35a4d52bec0b48375cd2a6fcce38ff\": container with ID starting with b5048f154c48a65627234e99c3400b260f35a4d52bec0b48375cd2a6fcce38ff not found: ID does not exist" containerID="b5048f154c48a65627234e99c3400b260f35a4d52bec0b48375cd2a6fcce38ff" Jan 21 11:03:41 crc kubenswrapper[4745]: I0121 11:03:41.710174 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5048f154c48a65627234e99c3400b260f35a4d52bec0b48375cd2a6fcce38ff"} err="failed to get container status \"b5048f154c48a65627234e99c3400b260f35a4d52bec0b48375cd2a6fcce38ff\": rpc error: code = NotFound desc = could not find container \"b5048f154c48a65627234e99c3400b260f35a4d52bec0b48375cd2a6fcce38ff\": container with ID starting with b5048f154c48a65627234e99c3400b260f35a4d52bec0b48375cd2a6fcce38ff not found: ID does not exist" Jan 21 11:03:42 crc kubenswrapper[4745]: I0121 11:03:42.011667 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" path="/var/lib/kubelet/pods/c8083bf6-fbee-4316-b9ed-553d2e5745c0/volumes" Jan 21 11:03:52 crc kubenswrapper[4745]: I0121 11:03:52.689684 4745 scope.go:117] "RemoveContainer" containerID="6ef26153aa55b357d813b014c35776fa0255781b742cd0ac4cd65328bcc16dd0" Jan 21 
11:03:52 crc kubenswrapper[4745]: I0121 11:03:52.735107 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" event={"ID":"0ac74398-cfce-4a36-998c-057d617fe478","Type":"ContainerStarted","Data":"f5ef7523c72c69081158bfce153902606e8738b7e274dfa336aed7546fa609dd"} Jan 21 11:03:52 crc kubenswrapper[4745]: I0121 11:03:52.759662 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" podStartSLOduration=3.532443692 podStartE2EDuration="37.759641992s" podCreationTimestamp="2026-01-21 11:03:15 +0000 UTC" firstStartedPulling="2026-01-21 11:03:17.263948184 +0000 UTC m=+1581.724735782" lastFinishedPulling="2026-01-21 11:03:51.491146474 +0000 UTC m=+1615.951934082" observedRunningTime="2026-01-21 11:03:52.752956161 +0000 UTC m=+1617.213743769" watchObservedRunningTime="2026-01-21 11:03:52.759641992 +0000 UTC m=+1617.220429590" Jan 21 11:04:08 crc kubenswrapper[4745]: I0121 11:04:08.906143 4745 generic.go:334] "Generic (PLEG): container finished" podID="0ac74398-cfce-4a36-998c-057d617fe478" containerID="f5ef7523c72c69081158bfce153902606e8738b7e274dfa336aed7546fa609dd" exitCode=0 Jan 21 11:04:08 crc kubenswrapper[4745]: I0121 11:04:08.906341 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" event={"ID":"0ac74398-cfce-4a36-998c-057d617fe478","Type":"ContainerDied","Data":"f5ef7523c72c69081158bfce153902606e8738b7e274dfa336aed7546fa609dd"} Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.347622 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.448362 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tcrh\" (UniqueName: \"kubernetes.io/projected/0ac74398-cfce-4a36-998c-057d617fe478-kube-api-access-6tcrh\") pod \"0ac74398-cfce-4a36-998c-057d617fe478\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.448570 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-ssh-key-openstack-edpm-ipam\") pod \"0ac74398-cfce-4a36-998c-057d617fe478\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.448687 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-repo-setup-combined-ca-bundle\") pod \"0ac74398-cfce-4a36-998c-057d617fe478\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.448826 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-inventory\") pod \"0ac74398-cfce-4a36-998c-057d617fe478\" (UID: \"0ac74398-cfce-4a36-998c-057d617fe478\") " Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.456176 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0ac74398-cfce-4a36-998c-057d617fe478" (UID: "0ac74398-cfce-4a36-998c-057d617fe478"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.456857 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ac74398-cfce-4a36-998c-057d617fe478-kube-api-access-6tcrh" (OuterVolumeSpecName: "kube-api-access-6tcrh") pod "0ac74398-cfce-4a36-998c-057d617fe478" (UID: "0ac74398-cfce-4a36-998c-057d617fe478"). InnerVolumeSpecName "kube-api-access-6tcrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.484981 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-inventory" (OuterVolumeSpecName: "inventory") pod "0ac74398-cfce-4a36-998c-057d617fe478" (UID: "0ac74398-cfce-4a36-998c-057d617fe478"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.491488 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0ac74398-cfce-4a36-998c-057d617fe478" (UID: "0ac74398-cfce-4a36-998c-057d617fe478"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.551915 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.552092 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tcrh\" (UniqueName: \"kubernetes.io/projected/0ac74398-cfce-4a36-998c-057d617fe478-kube-api-access-6tcrh\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.552156 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.552169 4745 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ac74398-cfce-4a36-998c-057d617fe478-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.926987 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" event={"ID":"0ac74398-cfce-4a36-998c-057d617fe478","Type":"ContainerDied","Data":"d0871e073ef9c0f3e51e7686bb484d3b6880d34e9ed9d472eff842439e101059"} Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.927042 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0871e073ef9c0f3e51e7686bb484d3b6880d34e9ed9d472eff842439e101059" Jan 21 11:04:10 crc kubenswrapper[4745]: I0121 11:04:10.927043 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.037648 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64"] Jan 21 11:04:11 crc kubenswrapper[4745]: E0121 11:04:11.038056 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" containerName="extract-content" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.038073 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" containerName="extract-content" Jan 21 11:04:11 crc kubenswrapper[4745]: E0121 11:04:11.038102 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" containerName="registry-server" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.038108 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" containerName="registry-server" Jan 21 11:04:11 crc kubenswrapper[4745]: E0121 11:04:11.038127 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ac74398-cfce-4a36-998c-057d617fe478" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.038134 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ac74398-cfce-4a36-998c-057d617fe478" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 11:04:11 crc kubenswrapper[4745]: E0121 11:04:11.038144 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" containerName="extract-utilities" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.038150 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" containerName="extract-utilities" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.038325 4745 
memory_manager.go:354] "RemoveStaleState removing state" podUID="c8083bf6-fbee-4316-b9ed-553d2e5745c0" containerName="registry-server" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.038336 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ac74398-cfce-4a36-998c-057d617fe478" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.039715 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.044581 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.044608 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.044679 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.044719 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.049878 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64"] Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.167159 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zbd64\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 
11:04:11.167454 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zbd64\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.167588 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf96q\" (UniqueName: \"kubernetes.io/projected/743f1675-ea0d-4d4d-837b-82c6807bb12a-kube-api-access-sf96q\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zbd64\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.269734 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf96q\" (UniqueName: \"kubernetes.io/projected/743f1675-ea0d-4d4d-837b-82c6807bb12a-kube-api-access-sf96q\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zbd64\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.269957 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zbd64\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.270023 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-inventory\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-zbd64\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.276794 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zbd64\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.277068 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zbd64\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.288965 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf96q\" (UniqueName: \"kubernetes.io/projected/743f1675-ea0d-4d4d-837b-82c6807bb12a-kube-api-access-sf96q\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-zbd64\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.372002 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.904849 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64"] Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.910395 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:04:11 crc kubenswrapper[4745]: I0121 11:04:11.940120 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" event={"ID":"743f1675-ea0d-4d4d-837b-82c6807bb12a","Type":"ContainerStarted","Data":"c779b094cf1e6e2158a1f67ee6000f511082fd433044a8a927a946db46d0026f"} Jan 21 11:04:12 crc kubenswrapper[4745]: I0121 11:04:12.953413 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" event={"ID":"743f1675-ea0d-4d4d-837b-82c6807bb12a","Type":"ContainerStarted","Data":"532952492c1c216ec462e26025eb716aae42c43bdf82f4337adf281dabe6cadf"} Jan 21 11:04:12 crc kubenswrapper[4745]: I0121 11:04:12.988172 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" podStartSLOduration=1.26831548 podStartE2EDuration="1.988145641s" podCreationTimestamp="2026-01-21 11:04:11 +0000 UTC" firstStartedPulling="2026-01-21 11:04:11.909065212 +0000 UTC m=+1636.369852810" lastFinishedPulling="2026-01-21 11:04:12.628895373 +0000 UTC m=+1637.089682971" observedRunningTime="2026-01-21 11:04:12.977285953 +0000 UTC m=+1637.438073561" watchObservedRunningTime="2026-01-21 11:04:12.988145641 +0000 UTC m=+1637.448933259" Jan 21 11:04:15 crc kubenswrapper[4745]: I0121 11:04:15.866465 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:04:15 crc kubenswrapper[4745]: I0121 11:04:15.867230 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:04:15 crc kubenswrapper[4745]: I0121 11:04:15.984369 4745 generic.go:334] "Generic (PLEG): container finished" podID="743f1675-ea0d-4d4d-837b-82c6807bb12a" containerID="532952492c1c216ec462e26025eb716aae42c43bdf82f4337adf281dabe6cadf" exitCode=0 Jan 21 11:04:15 crc kubenswrapper[4745]: I0121 11:04:15.984413 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" event={"ID":"743f1675-ea0d-4d4d-837b-82c6807bb12a","Type":"ContainerDied","Data":"532952492c1c216ec462e26025eb716aae42c43bdf82f4337adf281dabe6cadf"} Jan 21 11:04:17 crc kubenswrapper[4745]: I0121 11:04:17.445483 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:17 crc kubenswrapper[4745]: I0121 11:04:17.527296 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf96q\" (UniqueName: \"kubernetes.io/projected/743f1675-ea0d-4d4d-837b-82c6807bb12a-kube-api-access-sf96q\") pod \"743f1675-ea0d-4d4d-837b-82c6807bb12a\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " Jan 21 11:04:17 crc kubenswrapper[4745]: I0121 11:04:17.527434 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-ssh-key-openstack-edpm-ipam\") pod \"743f1675-ea0d-4d4d-837b-82c6807bb12a\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " Jan 21 11:04:17 crc kubenswrapper[4745]: I0121 11:04:17.528818 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-inventory\") pod \"743f1675-ea0d-4d4d-837b-82c6807bb12a\" (UID: \"743f1675-ea0d-4d4d-837b-82c6807bb12a\") " Jan 21 11:04:17 crc kubenswrapper[4745]: I0121 11:04:17.541395 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743f1675-ea0d-4d4d-837b-82c6807bb12a-kube-api-access-sf96q" (OuterVolumeSpecName: "kube-api-access-sf96q") pod "743f1675-ea0d-4d4d-837b-82c6807bb12a" (UID: "743f1675-ea0d-4d4d-837b-82c6807bb12a"). InnerVolumeSpecName "kube-api-access-sf96q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:04:17 crc kubenswrapper[4745]: I0121 11:04:17.557315 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-inventory" (OuterVolumeSpecName: "inventory") pod "743f1675-ea0d-4d4d-837b-82c6807bb12a" (UID: "743f1675-ea0d-4d4d-837b-82c6807bb12a"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:04:17 crc kubenswrapper[4745]: I0121 11:04:17.562747 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "743f1675-ea0d-4d4d-837b-82c6807bb12a" (UID: "743f1675-ea0d-4d4d-837b-82c6807bb12a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:04:17 crc kubenswrapper[4745]: I0121 11:04:17.632283 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf96q\" (UniqueName: \"kubernetes.io/projected/743f1675-ea0d-4d4d-837b-82c6807bb12a-kube-api-access-sf96q\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:17 crc kubenswrapper[4745]: I0121 11:04:17.632338 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:17 crc kubenswrapper[4745]: I0121 11:04:17.632348 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/743f1675-ea0d-4d4d-837b-82c6807bb12a-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.011899 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.013612 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-zbd64" event={"ID":"743f1675-ea0d-4d4d-837b-82c6807bb12a","Type":"ContainerDied","Data":"c779b094cf1e6e2158a1f67ee6000f511082fd433044a8a927a946db46d0026f"} Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.013656 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c779b094cf1e6e2158a1f67ee6000f511082fd433044a8a927a946db46d0026f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.100084 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f"] Jan 21 11:04:18 crc kubenswrapper[4745]: E0121 11:04:18.100736 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743f1675-ea0d-4d4d-837b-82c6807bb12a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.100768 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="743f1675-ea0d-4d4d-837b-82c6807bb12a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.101090 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="743f1675-ea0d-4d4d-837b-82c6807bb12a" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.101965 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.107993 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.108050 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.108113 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.108399 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.118123 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f"] Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.153087 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.153201 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.153250 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzt9b\" (UniqueName: \"kubernetes.io/projected/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-kube-api-access-zzt9b\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.153294 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.255452 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.255580 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.255625 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzt9b\" (UniqueName: 
\"kubernetes.io/projected/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-kube-api-access-zzt9b\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.255652 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.259663 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.262174 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.262943 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.274974 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzt9b\" (UniqueName: \"kubernetes.io/projected/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-kube-api-access-zzt9b\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.446087 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" Jan 21 11:04:18 crc kubenswrapper[4745]: I0121 11:04:18.998329 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f"] Jan 21 11:04:19 crc kubenswrapper[4745]: I0121 11:04:19.028402 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" event={"ID":"98ae5b1b-1fcf-4dbd-aeab-e9c831863408","Type":"ContainerStarted","Data":"eff342383a7d04740aaf89705cf78cbcef368b21471a9ede0ba7230e735a5810"} Jan 21 11:04:20 crc kubenswrapper[4745]: I0121 11:04:20.047027 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" event={"ID":"98ae5b1b-1fcf-4dbd-aeab-e9c831863408","Type":"ContainerStarted","Data":"6acebcc493251f30177a1bfdd06859bc5649a316e002eadc45fee0524147c63e"} Jan 21 11:04:20 crc kubenswrapper[4745]: I0121 11:04:20.078802 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" podStartSLOduration=1.565132363 podStartE2EDuration="2.078778613s" podCreationTimestamp="2026-01-21 11:04:18 +0000 UTC" firstStartedPulling="2026-01-21 11:04:19.00747627 +0000 UTC m=+1643.468263868" 
lastFinishedPulling="2026-01-21 11:04:19.5211225 +0000 UTC m=+1643.981910118" observedRunningTime="2026-01-21 11:04:20.067276129 +0000 UTC m=+1644.528063727" watchObservedRunningTime="2026-01-21 11:04:20.078778613 +0000 UTC m=+1644.539566211" Jan 21 11:04:45 crc kubenswrapper[4745]: I0121 11:04:45.866575 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:04:45 crc kubenswrapper[4745]: I0121 11:04:45.867117 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.232647 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xs8bk"] Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.235399 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.247520 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xs8bk"] Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.319808 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-utilities\") pod \"community-operators-xs8bk\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.319993 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9cpz\" (UniqueName: \"kubernetes.io/projected/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-kube-api-access-n9cpz\") pod \"community-operators-xs8bk\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.320065 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-catalog-content\") pod \"community-operators-xs8bk\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.422042 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-utilities\") pod \"community-operators-xs8bk\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.422169 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-n9cpz\" (UniqueName: \"kubernetes.io/projected/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-kube-api-access-n9cpz\") pod \"community-operators-xs8bk\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.422235 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-catalog-content\") pod \"community-operators-xs8bk\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.422976 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-catalog-content\") pod \"community-operators-xs8bk\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.422998 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-utilities\") pod \"community-operators-xs8bk\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.453007 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9cpz\" (UniqueName: \"kubernetes.io/projected/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-kube-api-access-n9cpz\") pod \"community-operators-xs8bk\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:04:57 crc kubenswrapper[4745]: I0121 11:04:57.583145 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:04:58 crc kubenswrapper[4745]: I0121 11:04:58.071136 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xs8bk"] Jan 21 11:04:58 crc kubenswrapper[4745]: I0121 11:04:58.455023 4745 generic.go:334] "Generic (PLEG): container finished" podID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" containerID="1d5073c5f2618c94b8d117024de48bbe0d50d3a650fac8b56343c4b166b9af7f" exitCode=0 Jan 21 11:04:58 crc kubenswrapper[4745]: I0121 11:04:58.455120 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs8bk" event={"ID":"051eac0f-ab59-4b64-ad4c-051ddb52e2fe","Type":"ContainerDied","Data":"1d5073c5f2618c94b8d117024de48bbe0d50d3a650fac8b56343c4b166b9af7f"} Jan 21 11:04:58 crc kubenswrapper[4745]: I0121 11:04:58.455420 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs8bk" event={"ID":"051eac0f-ab59-4b64-ad4c-051ddb52e2fe","Type":"ContainerStarted","Data":"0b58508bb30d24e34912cfe29d0c82d10acc8ef91f2ac045d2d014f23793ee7e"} Jan 21 11:04:59 crc kubenswrapper[4745]: I0121 11:04:59.531724 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs8bk" event={"ID":"051eac0f-ab59-4b64-ad4c-051ddb52e2fe","Type":"ContainerStarted","Data":"92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142"} Jan 21 11:05:01 crc kubenswrapper[4745]: I0121 11:05:01.551498 4745 generic.go:334] "Generic (PLEG): container finished" podID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" containerID="92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142" exitCode=0 Jan 21 11:05:01 crc kubenswrapper[4745]: I0121 11:05:01.551584 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs8bk" 
event={"ID":"051eac0f-ab59-4b64-ad4c-051ddb52e2fe","Type":"ContainerDied","Data":"92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142"} Jan 21 11:05:02 crc kubenswrapper[4745]: I0121 11:05:02.562469 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs8bk" event={"ID":"051eac0f-ab59-4b64-ad4c-051ddb52e2fe","Type":"ContainerStarted","Data":"362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a"} Jan 21 11:05:02 crc kubenswrapper[4745]: I0121 11:05:02.594249 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xs8bk" podStartSLOduration=2.029756697 podStartE2EDuration="5.594227034s" podCreationTimestamp="2026-01-21 11:04:57 +0000 UTC" firstStartedPulling="2026-01-21 11:04:58.457348353 +0000 UTC m=+1682.918135951" lastFinishedPulling="2026-01-21 11:05:02.02181869 +0000 UTC m=+1686.482606288" observedRunningTime="2026-01-21 11:05:02.590928947 +0000 UTC m=+1687.051716545" watchObservedRunningTime="2026-01-21 11:05:02.594227034 +0000 UTC m=+1687.055014642" Jan 21 11:05:07 crc kubenswrapper[4745]: I0121 11:05:07.583291 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:05:07 crc kubenswrapper[4745]: I0121 11:05:07.584180 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:05:07 crc kubenswrapper[4745]: I0121 11:05:07.645739 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:05:07 crc kubenswrapper[4745]: I0121 11:05:07.705300 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:05:07 crc kubenswrapper[4745]: I0121 11:05:07.892967 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-xs8bk"] Jan 21 11:05:09 crc kubenswrapper[4745]: I0121 11:05:09.630833 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xs8bk" podUID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" containerName="registry-server" containerID="cri-o://362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a" gracePeriod=2 Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.154972 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.195674 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9cpz\" (UniqueName: \"kubernetes.io/projected/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-kube-api-access-n9cpz\") pod \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.195817 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-catalog-content\") pod \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.196126 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-utilities\") pod \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\" (UID: \"051eac0f-ab59-4b64-ad4c-051ddb52e2fe\") " Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.196999 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-utilities" (OuterVolumeSpecName: "utilities") pod "051eac0f-ab59-4b64-ad4c-051ddb52e2fe" (UID: 
"051eac0f-ab59-4b64-ad4c-051ddb52e2fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.201660 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-kube-api-access-n9cpz" (OuterVolumeSpecName: "kube-api-access-n9cpz") pod "051eac0f-ab59-4b64-ad4c-051ddb52e2fe" (UID: "051eac0f-ab59-4b64-ad4c-051ddb52e2fe"). InnerVolumeSpecName "kube-api-access-n9cpz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.245232 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "051eac0f-ab59-4b64-ad4c-051ddb52e2fe" (UID: "051eac0f-ab59-4b64-ad4c-051ddb52e2fe"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.298793 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.298846 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9cpz\" (UniqueName: \"kubernetes.io/projected/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-kube-api-access-n9cpz\") on node \"crc\" DevicePath \"\"" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.298869 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/051eac0f-ab59-4b64-ad4c-051ddb52e2fe-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.641738 4745 generic.go:334] "Generic (PLEG): container finished" 
podID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" containerID="362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a" exitCode=0 Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.641787 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs8bk" event={"ID":"051eac0f-ab59-4b64-ad4c-051ddb52e2fe","Type":"ContainerDied","Data":"362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a"} Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.641816 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xs8bk" event={"ID":"051eac0f-ab59-4b64-ad4c-051ddb52e2fe","Type":"ContainerDied","Data":"0b58508bb30d24e34912cfe29d0c82d10acc8ef91f2ac045d2d014f23793ee7e"} Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.641835 4745 scope.go:117] "RemoveContainer" containerID="362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.641835 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xs8bk" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.676940 4745 scope.go:117] "RemoveContainer" containerID="92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.682378 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xs8bk"] Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.692128 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xs8bk"] Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.705445 4745 scope.go:117] "RemoveContainer" containerID="1d5073c5f2618c94b8d117024de48bbe0d50d3a650fac8b56343c4b166b9af7f" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.761807 4745 scope.go:117] "RemoveContainer" containerID="362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a" Jan 21 11:05:10 crc kubenswrapper[4745]: E0121 11:05:10.762455 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a\": container with ID starting with 362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a not found: ID does not exist" containerID="362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.762486 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a"} err="failed to get container status \"362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a\": rpc error: code = NotFound desc = could not find container \"362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a\": container with ID starting with 362418b6ca9da79d1196ee95210641bb59115dc71a9533f91deba56cc747c93a not 
found: ID does not exist" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.762510 4745 scope.go:117] "RemoveContainer" containerID="92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142" Jan 21 11:05:10 crc kubenswrapper[4745]: E0121 11:05:10.763039 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142\": container with ID starting with 92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142 not found: ID does not exist" containerID="92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.763067 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142"} err="failed to get container status \"92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142\": rpc error: code = NotFound desc = could not find container \"92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142\": container with ID starting with 92248f26b15d1ebaba442a8c4b69e2495f2a3d9effac778de73849bf2e0aa142 not found: ID does not exist" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.763091 4745 scope.go:117] "RemoveContainer" containerID="1d5073c5f2618c94b8d117024de48bbe0d50d3a650fac8b56343c4b166b9af7f" Jan 21 11:05:10 crc kubenswrapper[4745]: E0121 11:05:10.763362 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d5073c5f2618c94b8d117024de48bbe0d50d3a650fac8b56343c4b166b9af7f\": container with ID starting with 1d5073c5f2618c94b8d117024de48bbe0d50d3a650fac8b56343c4b166b9af7f not found: ID does not exist" containerID="1d5073c5f2618c94b8d117024de48bbe0d50d3a650fac8b56343c4b166b9af7f" Jan 21 11:05:10 crc kubenswrapper[4745]: I0121 11:05:10.763385 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d5073c5f2618c94b8d117024de48bbe0d50d3a650fac8b56343c4b166b9af7f"} err="failed to get container status \"1d5073c5f2618c94b8d117024de48bbe0d50d3a650fac8b56343c4b166b9af7f\": rpc error: code = NotFound desc = could not find container \"1d5073c5f2618c94b8d117024de48bbe0d50d3a650fac8b56343c4b166b9af7f\": container with ID starting with 1d5073c5f2618c94b8d117024de48bbe0d50d3a650fac8b56343c4b166b9af7f not found: ID does not exist" Jan 21 11:05:12 crc kubenswrapper[4745]: I0121 11:05:12.014875 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" path="/var/lib/kubelet/pods/051eac0f-ab59-4b64-ad4c-051ddb52e2fe/volumes" Jan 21 11:05:15 crc kubenswrapper[4745]: I0121 11:05:15.866465 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:05:15 crc kubenswrapper[4745]: I0121 11:05:15.866835 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:05:15 crc kubenswrapper[4745]: I0121 11:05:15.866960 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 11:05:15 crc kubenswrapper[4745]: I0121 11:05:15.867860 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:05:15 crc kubenswrapper[4745]: I0121 11:05:15.867935 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" gracePeriod=600 Jan 21 11:05:15 crc kubenswrapper[4745]: E0121 11:05:15.995711 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:05:16 crc kubenswrapper[4745]: I0121 11:05:16.710153 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" exitCode=0 Jan 21 11:05:16 crc kubenswrapper[4745]: I0121 11:05:16.710195 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"} Jan 21 11:05:16 crc kubenswrapper[4745]: I0121 11:05:16.710271 4745 scope.go:117] "RemoveContainer" containerID="de2d72e875ebdac4072b7484915db3fb7f2ddf3319a9637c3c9d5b967e4bccb7" Jan 21 11:05:16 crc kubenswrapper[4745]: I0121 11:05:16.711130 4745 
scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:05:16 crc kubenswrapper[4745]: E0121 11:05:16.711489 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:05:32 crc kubenswrapper[4745]: I0121 11:05:32.000310 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:05:32 crc kubenswrapper[4745]: E0121 11:05:32.000949 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:05:46 crc kubenswrapper[4745]: I0121 11:05:46.007603 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:05:46 crc kubenswrapper[4745]: E0121 11:05:46.008350 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:05:52 crc kubenswrapper[4745]: I0121 
11:05:52.835929 4745 scope.go:117] "RemoveContainer" containerID="e9dbd15b7055d340939a07062c7769bc063e9533659fc5f255ec4576988e4839" Jan 21 11:05:54 crc kubenswrapper[4745]: I0121 11:05:54.068918 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-6bqnp"] Jan 21 11:05:54 crc kubenswrapper[4745]: I0121 11:05:54.080253 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-6bqnp"] Jan 21 11:05:56 crc kubenswrapper[4745]: I0121 11:05:56.016889 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4773a81-6741-4319-8bb6-e4ec0badc52b" path="/var/lib/kubelet/pods/f4773a81-6741-4319-8bb6-e4ec0badc52b/volumes" Jan 21 11:05:59 crc kubenswrapper[4745]: I0121 11:05:59.075947 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-pcgvc"] Jan 21 11:05:59 crc kubenswrapper[4745]: I0121 11:05:59.093700 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-zdp4x"] Jan 21 11:05:59 crc kubenswrapper[4745]: I0121 11:05:59.114028 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-4b85-account-create-update-m8v62"] Jan 21 11:05:59 crc kubenswrapper[4745]: I0121 11:05:59.125370 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-4083-account-create-update-lc4t4"] Jan 21 11:05:59 crc kubenswrapper[4745]: I0121 11:05:59.142597 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-0d34-account-create-update-lmqpb"] Jan 21 11:05:59 crc kubenswrapper[4745]: I0121 11:05:59.152655 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-zdp4x"] Jan 21 11:05:59 crc kubenswrapper[4745]: I0121 11:05:59.160833 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-4083-account-create-update-lc4t4"] Jan 21 11:05:59 crc kubenswrapper[4745]: I0121 11:05:59.168938 4745 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/glance-4b85-account-create-update-m8v62"] Jan 21 11:05:59 crc kubenswrapper[4745]: I0121 11:05:59.177152 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-pcgvc"] Jan 21 11:05:59 crc kubenswrapper[4745]: I0121 11:05:59.184592 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-0d34-account-create-update-lmqpb"] Jan 21 11:06:00 crc kubenswrapper[4745]: I0121 11:06:00.001106 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:06:00 crc kubenswrapper[4745]: E0121 11:06:00.001851 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:06:00 crc kubenswrapper[4745]: I0121 11:06:00.013426 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f" path="/var/lib/kubelet/pods/258eaf35-4a5d-4afe-b1d1-ff6c4e1bed9f/volumes" Jan 21 11:06:00 crc kubenswrapper[4745]: I0121 11:06:00.015088 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0203a2-37c2-4036-803d-3f2e86396cda" path="/var/lib/kubelet/pods/7c0203a2-37c2-4036-803d-3f2e86396cda/volumes" Jan 21 11:06:00 crc kubenswrapper[4745]: I0121 11:06:00.016058 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7effc287-b786-4ed3-84a8-e7bc8ec693cb" path="/var/lib/kubelet/pods/7effc287-b786-4ed3-84a8-e7bc8ec693cb/volumes" Jan 21 11:06:00 crc kubenswrapper[4745]: I0121 11:06:00.018190 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="88d1294c-ad74-4bbf-ab56-cfc7f9c9c213" path="/var/lib/kubelet/pods/88d1294c-ad74-4bbf-ab56-cfc7f9c9c213/volumes" Jan 21 11:06:00 crc kubenswrapper[4745]: I0121 11:06:00.020191 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf" path="/var/lib/kubelet/pods/cd98a0e3-2cd0-48f9-a24c-bd485fe3a3cf/volumes" Jan 21 11:06:11 crc kubenswrapper[4745]: I0121 11:06:11.001877 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:06:11 crc kubenswrapper[4745]: E0121 11:06:11.003360 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:06:11 crc kubenswrapper[4745]: I0121 11:06:11.061866 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gzsc2"] Jan 21 11:06:11 crc kubenswrapper[4745]: I0121 11:06:11.074634 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gzsc2"] Jan 21 11:06:12 crc kubenswrapper[4745]: I0121 11:06:12.016281 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d78adcaa-487f-4b09-879f-a5c680fee573" path="/var/lib/kubelet/pods/d78adcaa-487f-4b09-879f-a5c680fee573/volumes" Jan 21 11:06:23 crc kubenswrapper[4745]: I0121 11:06:23.000785 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:06:23 crc kubenswrapper[4745]: E0121 11:06:23.001672 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:06:36 crc kubenswrapper[4745]: I0121 11:06:36.007130 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:06:36 crc kubenswrapper[4745]: E0121 11:06:36.007903 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:06:43 crc kubenswrapper[4745]: I0121 11:06:43.054604 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-9mgl2"] Jan 21 11:06:43 crc kubenswrapper[4745]: I0121 11:06:43.066808 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-b6qwz"] Jan 21 11:06:43 crc kubenswrapper[4745]: I0121 11:06:43.091780 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-sm8n4"] Jan 21 11:06:43 crc kubenswrapper[4745]: I0121 11:06:43.110502 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-9mgl2"] Jan 21 11:06:43 crc kubenswrapper[4745]: I0121 11:06:43.123078 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-b6qwz"] Jan 21 11:06:43 crc kubenswrapper[4745]: I0121 11:06:43.136431 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-sm8n4"] Jan 21 11:06:44 crc kubenswrapper[4745]: I0121 11:06:44.012477 4745 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d" path="/var/lib/kubelet/pods/09f9ff49-53aa-4ecb-8e5c-b4fd1c13c78d/volumes" Jan 21 11:06:44 crc kubenswrapper[4745]: I0121 11:06:44.014846 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae" path="/var/lib/kubelet/pods/1fc5fc9c-917c-42bb-b3b1-ca81cd63e6ae/volumes" Jan 21 11:06:44 crc kubenswrapper[4745]: I0121 11:06:44.018003 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="269d4758-9e42-46a9-9e75-b2fee912d2fd" path="/var/lib/kubelet/pods/269d4758-9e42-46a9-9e75-b2fee912d2fd/volumes" Jan 21 11:06:47 crc kubenswrapper[4745]: I0121 11:06:47.070780 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-e410-account-create-update-sg8cc"] Jan 21 11:06:47 crc kubenswrapper[4745]: I0121 11:06:47.091059 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-278d-account-create-update-2rxsx"] Jan 21 11:06:47 crc kubenswrapper[4745]: I0121 11:06:47.100577 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-hvq49"] Jan 21 11:06:47 crc kubenswrapper[4745]: I0121 11:06:47.108762 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cd91-account-create-update-jlzv9"] Jan 21 11:06:47 crc kubenswrapper[4745]: I0121 11:06:47.117212 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2071-account-create-update-c226s"] Jan 21 11:06:47 crc kubenswrapper[4745]: I0121 11:06:47.126073 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-hvq49"] Jan 21 11:06:47 crc kubenswrapper[4745]: I0121 11:06:47.135788 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-278d-account-create-update-2rxsx"] Jan 21 11:06:47 crc kubenswrapper[4745]: I0121 11:06:47.144574 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/cinder-e410-account-create-update-sg8cc"] Jan 21 11:06:47 crc kubenswrapper[4745]: I0121 11:06:47.153923 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2071-account-create-update-c226s"] Jan 21 11:06:47 crc kubenswrapper[4745]: I0121 11:06:47.163263 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cd91-account-create-update-jlzv9"] Jan 21 11:06:48 crc kubenswrapper[4745]: I0121 11:06:48.001161 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:06:48 crc kubenswrapper[4745]: E0121 11:06:48.001448 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:06:48 crc kubenswrapper[4745]: I0121 11:06:48.012483 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="076c482a-90e9-4db9-aa66-85e7d6a1ad3b" path="/var/lib/kubelet/pods/076c482a-90e9-4db9-aa66-85e7d6a1ad3b/volumes" Jan 21 11:06:48 crc kubenswrapper[4745]: I0121 11:06:48.014895 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c2e56ea-b70a-4562-87ae-9811198d1c96" path="/var/lib/kubelet/pods/5c2e56ea-b70a-4562-87ae-9811198d1c96/volumes" Jan 21 11:06:48 crc kubenswrapper[4745]: I0121 11:06:48.017007 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773f1b49-1207-44fe-ba15-ee0186030684" path="/var/lib/kubelet/pods/773f1b49-1207-44fe-ba15-ee0186030684/volumes" Jan 21 11:06:48 crc kubenswrapper[4745]: I0121 11:06:48.019629 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e95ab45a-aa5c-48af-8e3d-1a8900427471" path="/var/lib/kubelet/pods/e95ab45a-aa5c-48af-8e3d-1a8900427471/volumes" Jan 21 11:06:48 crc kubenswrapper[4745]: I0121 11:06:48.021087 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edaf1847-278a-4826-a868-c5923e1ea872" path="/var/lib/kubelet/pods/edaf1847-278a-4826-a868-c5923e1ea872/volumes" Jan 21 11:06:52 crc kubenswrapper[4745]: I0121 11:06:52.941432 4745 scope.go:117] "RemoveContainer" containerID="17d0dbc23e1967da164f116764ef5cf86553358448a1853862df54ca7a33e7ae" Jan 21 11:06:52 crc kubenswrapper[4745]: I0121 11:06:52.971832 4745 scope.go:117] "RemoveContainer" containerID="f08b6a5431fc0245ddefdda89248b758593c5c6049bb16ae0bf6e81d6e6c477c" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.010489 4745 scope.go:117] "RemoveContainer" containerID="ba0b931b3f33510b964ebce883ebe4922952aa41cb2ff7cf35aadf162cbe2700" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.049762 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-ccz6s"] Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.062509 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-ccz6s"] Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.083133 4745 scope.go:117] "RemoveContainer" containerID="27b72fc04f017bc615dd59a7ac7c06ef300a814a91330b019a6288e7ca6c3a27" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.113696 4745 scope.go:117] "RemoveContainer" containerID="5852c813b080230eaa54a31092b04998ea419d60cc3d066cc76cc02de66ef5ec" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.154482 4745 scope.go:117] "RemoveContainer" containerID="94aa73677b12dba86da7e8cd092f041cf7411430e64c50535b092071d637c803" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.201859 4745 scope.go:117] "RemoveContainer" containerID="b9bc99531dd008a5d455342d10983c39fc8446c55dc19e73dafed8b83d8f9b75" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 
11:06:53.226744 4745 scope.go:117] "RemoveContainer" containerID="ff248a821e88d231f300bf25a3b8b77c3bead3d1458cbf6acc5c8dc443f44046" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.259489 4745 scope.go:117] "RemoveContainer" containerID="4794202b30214054696fc1d938aa058f1eca53c8a3be108c77d5ba8795f5a39f" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.290955 4745 scope.go:117] "RemoveContainer" containerID="8cef54c7b8ff35361805071da5cd62ade53b699a6f71d02461aa4d9e16c41cf1" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.310890 4745 scope.go:117] "RemoveContainer" containerID="1149680219c7a9e31f0d009865027fb51efb9d7992c985160ecc8b071b8fc5e6" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.337384 4745 scope.go:117] "RemoveContainer" containerID="c4bd5ca67543e5695924ea9805a43d6ffbc6e7ee22cd95b7b6558b9b4616c382" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.360280 4745 scope.go:117] "RemoveContainer" containerID="d94c34b48ac1863fcefb6ad33a4c0ca20dd6cf7b254b6f1fa90519aa07551d78" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.381540 4745 scope.go:117] "RemoveContainer" containerID="2b30ebf1f7a5f0cffab5b6c88ee980eef2ea8aa204c8f06b9a0cb911dce72d20" Jan 21 11:06:53 crc kubenswrapper[4745]: I0121 11:06:53.408864 4745 scope.go:117] "RemoveContainer" containerID="123244fabb89d9ac2d241710054e9e1ea4357e315ccd2d53fc74549cc26e462b" Jan 21 11:06:54 crc kubenswrapper[4745]: I0121 11:06:54.015298 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="319bfda0-51fb-4790-95eb-f1eed417deff" path="/var/lib/kubelet/pods/319bfda0-51fb-4790-95eb-f1eed417deff/volumes" Jan 21 11:06:57 crc kubenswrapper[4745]: I0121 11:06:57.042951 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-cr2xq"] Jan 21 11:06:57 crc kubenswrapper[4745]: I0121 11:06:57.055665 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-cr2xq"] Jan 21 11:06:58 crc kubenswrapper[4745]: I0121 
11:06:58.014105 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="619fc0d2-35d7-4927-b904-5bf122e76d24" path="/var/lib/kubelet/pods/619fc0d2-35d7-4927-b904-5bf122e76d24/volumes" Jan 21 11:07:02 crc kubenswrapper[4745]: I0121 11:07:02.002589 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:07:02 crc kubenswrapper[4745]: E0121 11:07:02.003804 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:07:17 crc kubenswrapper[4745]: I0121 11:07:17.002286 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:07:17 crc kubenswrapper[4745]: E0121 11:07:17.003846 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:07:31 crc kubenswrapper[4745]: I0121 11:07:31.000312 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:07:31 crc kubenswrapper[4745]: E0121 11:07:31.001320 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:07:44 crc kubenswrapper[4745]: I0121 11:07:44.001132 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:07:44 crc kubenswrapper[4745]: E0121 11:07:44.002130 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:07:46 crc kubenswrapper[4745]: I0121 11:07:46.065476 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-pgh4g"] Jan 21 11:07:46 crc kubenswrapper[4745]: I0121 11:07:46.078099 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-pgh4g"] Jan 21 11:07:48 crc kubenswrapper[4745]: I0121 11:07:48.012963 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006a9d44-bc1a-41ce-8103-591327ca1afa" path="/var/lib/kubelet/pods/006a9d44-bc1a-41ce-8103-591327ca1afa/volumes" Jan 21 11:07:53 crc kubenswrapper[4745]: I0121 11:07:53.776285 4745 scope.go:117] "RemoveContainer" containerID="2dd9412639b60fb9a331be66f6006fe2b3cd7e5b5581fcaab636ab35b21078d6" Jan 21 11:07:53 crc kubenswrapper[4745]: I0121 11:07:53.823771 4745 scope.go:117] "RemoveContainer" containerID="b49bf2369716e44450e48493ed12bfa8b7e4216a4ceb1de2bdf1dd6a7dd11320" Jan 21 11:07:53 crc kubenswrapper[4745]: I0121 11:07:53.851302 4745 scope.go:117] "RemoveContainer" containerID="bc7e126930deceee5930e454d0bbcce31f72426de62cde678d37ff82abe2e933" Jan 
21 11:07:57 crc kubenswrapper[4745]: I0121 11:07:57.001364 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:07:57 crc kubenswrapper[4745]: E0121 11:07:57.002657 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:08:02 crc kubenswrapper[4745]: I0121 11:08:02.055290 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hj5fq"] Jan 21 11:08:02 crc kubenswrapper[4745]: I0121 11:08:02.074095 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hj5fq"] Jan 21 11:08:03 crc kubenswrapper[4745]: I0121 11:08:03.029083 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-45lw5"] Jan 21 11:08:03 crc kubenswrapper[4745]: I0121 11:08:03.039905 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-45lw5"] Jan 21 11:08:04 crc kubenswrapper[4745]: I0121 11:08:04.019827 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444abf7d-45e7-490e-a1af-5a082b51a3af" path="/var/lib/kubelet/pods/444abf7d-45e7-490e-a1af-5a082b51a3af/volumes" Jan 21 11:08:04 crc kubenswrapper[4745]: I0121 11:08:04.023663 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be0086c8-abfc-4740-9d81-62eab45e6507" path="/var/lib/kubelet/pods/be0086c8-abfc-4740-9d81-62eab45e6507/volumes" Jan 21 11:08:08 crc kubenswrapper[4745]: I0121 11:08:08.000657 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 
11:08:08 crc kubenswrapper[4745]: E0121 11:08:08.001647 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:08:13 crc kubenswrapper[4745]: I0121 11:08:13.063524 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-tsql6"]
Jan 21 11:08:13 crc kubenswrapper[4745]: I0121 11:08:13.077771 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-rrpzk"]
Jan 21 11:08:13 crc kubenswrapper[4745]: I0121 11:08:13.094398 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-tsql6"]
Jan 21 11:08:13 crc kubenswrapper[4745]: I0121 11:08:13.103931 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-rrpzk"]
Jan 21 11:08:14 crc kubenswrapper[4745]: I0121 11:08:14.038876 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="267909cf-90b8-451d-9882-715e44dc2c30" path="/var/lib/kubelet/pods/267909cf-90b8-451d-9882-715e44dc2c30/volumes"
Jan 21 11:08:14 crc kubenswrapper[4745]: I0121 11:08:14.041114 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="939e01d6-c378-485e-bd8c-8d394151ef3b" path="/var/lib/kubelet/pods/939e01d6-c378-485e-bd8c-8d394151ef3b/volumes"
Jan 21 11:08:14 crc kubenswrapper[4745]: I0121 11:08:14.504402 4745 generic.go:334] "Generic (PLEG): container finished" podID="98ae5b1b-1fcf-4dbd-aeab-e9c831863408" containerID="6acebcc493251f30177a1bfdd06859bc5649a316e002eadc45fee0524147c63e" exitCode=0
Jan 21 11:08:14 crc kubenswrapper[4745]: I0121 11:08:14.504456 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" event={"ID":"98ae5b1b-1fcf-4dbd-aeab-e9c831863408","Type":"ContainerDied","Data":"6acebcc493251f30177a1bfdd06859bc5649a316e002eadc45fee0524147c63e"}
Jan 21 11:08:15 crc kubenswrapper[4745]: I0121 11:08:15.033275 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-6x5s4"]
Jan 21 11:08:15 crc kubenswrapper[4745]: I0121 11:08:15.042783 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-6x5s4"]
Jan 21 11:08:15 crc kubenswrapper[4745]: I0121 11:08:15.954806 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f"
Jan 21 11:08:15 crc kubenswrapper[4745]: I0121 11:08:15.965998 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-bootstrap-combined-ca-bundle\") pod \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") "
Jan 21 11:08:15 crc kubenswrapper[4745]: I0121 11:08:15.966154 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-ssh-key-openstack-edpm-ipam\") pod \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") "
Jan 21 11:08:15 crc kubenswrapper[4745]: I0121 11:08:15.966181 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzt9b\" (UniqueName: \"kubernetes.io/projected/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-kube-api-access-zzt9b\") pod \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") "
Jan 21 11:08:15 crc kubenswrapper[4745]: I0121 11:08:15.966206 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-inventory\") pod \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\" (UID: \"98ae5b1b-1fcf-4dbd-aeab-e9c831863408\") "
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.020815 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "98ae5b1b-1fcf-4dbd-aeab-e9c831863408" (UID: "98ae5b1b-1fcf-4dbd-aeab-e9c831863408"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.020902 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-kube-api-access-zzt9b" (OuterVolumeSpecName: "kube-api-access-zzt9b") pod "98ae5b1b-1fcf-4dbd-aeab-e9c831863408" (UID: "98ae5b1b-1fcf-4dbd-aeab-e9c831863408"). InnerVolumeSpecName "kube-api-access-zzt9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.034439 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ac43469-c72e-486a-80bf-f6de6bdfa199" path="/var/lib/kubelet/pods/9ac43469-c72e-486a-80bf-f6de6bdfa199/volumes"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.046335 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-inventory" (OuterVolumeSpecName: "inventory") pod "98ae5b1b-1fcf-4dbd-aeab-e9c831863408" (UID: "98ae5b1b-1fcf-4dbd-aeab-e9c831863408"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.062370 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "98ae5b1b-1fcf-4dbd-aeab-e9c831863408" (UID: "98ae5b1b-1fcf-4dbd-aeab-e9c831863408"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.069734 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.069780 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzt9b\" (UniqueName: \"kubernetes.io/projected/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-kube-api-access-zzt9b\") on node \"crc\" DevicePath \"\""
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.069795 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.069808 4745 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98ae5b1b-1fcf-4dbd-aeab-e9c831863408-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.528671 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f" event={"ID":"98ae5b1b-1fcf-4dbd-aeab-e9c831863408","Type":"ContainerDied","Data":"eff342383a7d04740aaf89705cf78cbcef368b21471a9ede0ba7230e735a5810"}
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.528722 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eff342383a7d04740aaf89705cf78cbcef368b21471a9ede0ba7230e735a5810"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.528724 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.683007 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"]
Jan 21 11:08:16 crc kubenswrapper[4745]: E0121 11:08:16.683356 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" containerName="registry-server"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.683373 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" containerName="registry-server"
Jan 21 11:08:16 crc kubenswrapper[4745]: E0121 11:08:16.683404 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98ae5b1b-1fcf-4dbd-aeab-e9c831863408" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.683411 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="98ae5b1b-1fcf-4dbd-aeab-e9c831863408" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 21 11:08:16 crc kubenswrapper[4745]: E0121 11:08:16.683429 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" containerName="extract-utilities"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.683435 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" containerName="extract-utilities"
Jan 21 11:08:16 crc kubenswrapper[4745]: E0121 11:08:16.683442 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" containerName="extract-content"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.683447 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" containerName="extract-content"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.683634 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="98ae5b1b-1fcf-4dbd-aeab-e9c831863408" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.683648 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="051eac0f-ab59-4b64-ad4c-051ddb52e2fe" containerName="registry-server"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.691970 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.695644 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"]
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.697216 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.697218 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.702081 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.702802 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.892605 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.892689 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7psd\" (UniqueName: \"kubernetes.io/projected/1b0d7ba0-2c25-43bf-8762-013c96431756-kube-api-access-q7psd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.892798 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.995102 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.995519 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:08:16 crc kubenswrapper[4745]: I0121 11:08:16.995633 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7psd\" (UniqueName: \"kubernetes.io/projected/1b0d7ba0-2c25-43bf-8762-013c96431756-kube-api-access-q7psd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:08:17 crc kubenswrapper[4745]: I0121 11:08:17.002723 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:08:17 crc kubenswrapper[4745]: I0121 11:08:17.008429 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:08:17 crc kubenswrapper[4745]: I0121 11:08:17.033286 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7psd\" (UniqueName: \"kubernetes.io/projected/1b0d7ba0-2c25-43bf-8762-013c96431756-kube-api-access-q7psd\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:08:17 crc kubenswrapper[4745]: I0121 11:08:17.310640 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:08:17 crc kubenswrapper[4745]: I0121 11:08:17.879817 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"]
Jan 21 11:08:18 crc kubenswrapper[4745]: I0121 11:08:18.551062 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n" event={"ID":"1b0d7ba0-2c25-43bf-8762-013c96431756","Type":"ContainerStarted","Data":"1484a561693b6b253df8c6410c5a6cf354f5b955272c301d1be1cabdccc0896d"}
Jan 21 11:08:18 crc kubenswrapper[4745]: I0121 11:08:18.551387 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n" event={"ID":"1b0d7ba0-2c25-43bf-8762-013c96431756","Type":"ContainerStarted","Data":"213faf2e085141863a85d4307f01b5ee2821d09f0d92bf93b75660e1383bcb97"}
Jan 21 11:08:18 crc kubenswrapper[4745]: I0121 11:08:18.584410 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n" podStartSLOduration=2.170465828 podStartE2EDuration="2.584386153s" podCreationTimestamp="2026-01-21 11:08:16 +0000 UTC" firstStartedPulling="2026-01-21 11:08:17.886445011 +0000 UTC m=+1882.347232609" lastFinishedPulling="2026-01-21 11:08:18.300365336 +0000 UTC m=+1882.761152934" observedRunningTime="2026-01-21 11:08:18.572135125 +0000 UTC m=+1883.032922743" watchObservedRunningTime="2026-01-21 11:08:18.584386153 +0000 UTC m=+1883.045173761"
Jan 21 11:08:20 crc kubenswrapper[4745]: I0121 11:08:20.001046 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"
Jan 21 11:08:20 crc kubenswrapper[4745]: E0121 11:08:20.001672 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:08:31 crc kubenswrapper[4745]: I0121 11:08:31.000391 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"
Jan 21 11:08:31 crc kubenswrapper[4745]: E0121 11:08:31.001032 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:08:43 crc kubenswrapper[4745]: I0121 11:08:43.000812 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"
Jan 21 11:08:43 crc kubenswrapper[4745]: E0121 11:08:43.001966 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:08:53 crc kubenswrapper[4745]: I0121 11:08:53.953066 4745 scope.go:117] "RemoveContainer" containerID="db6a851847d39f560fd4a3b35de6cbab2e8a942e537c0044e09db3f7cef847ad"
Jan 21 11:08:53 crc kubenswrapper[4745]: I0121 11:08:53.984479 4745 scope.go:117] "RemoveContainer" containerID="4b89ace2acb3c934a500372662802c3e8ce2acc932ae30ce38cc1d3595500f20"
Jan 21 11:08:54 crc kubenswrapper[4745]: I0121 11:08:54.066436 4745 scope.go:117] "RemoveContainer" containerID="713a4c5f522bb4cc43bac1cd27f219771ed3c9e6af9220bf56d67d54c691a618"
Jan 21 11:08:54 crc kubenswrapper[4745]: I0121 11:08:54.100961 4745 scope.go:117] "RemoveContainer" containerID="68c5cff5d9b6b515a71000aefd4a7bc7875a3525a7c8e2d6c70c406c3598993e"
Jan 21 11:08:54 crc kubenswrapper[4745]: I0121 11:08:54.157938 4745 scope.go:117] "RemoveContainer" containerID="108ce0cebeefd813918deebb94fde732e4663dab8560edf6a5b7c39d0f458ec8"
Jan 21 11:08:56 crc kubenswrapper[4745]: I0121 11:08:56.006275 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"
Jan 21 11:08:56 crc kubenswrapper[4745]: E0121 11:08:56.007221 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:09:10 crc kubenswrapper[4745]: I0121 11:09:10.000718 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"
Jan 21 11:09:10 crc kubenswrapper[4745]: E0121 11:09:10.001661 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:09:21 crc kubenswrapper[4745]: I0121 11:09:21.000319 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"
Jan 21 11:09:21 crc kubenswrapper[4745]: E0121 11:09:21.001025 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:09:35 crc kubenswrapper[4745]: I0121 11:09:35.000202 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"
Jan 21 11:09:35 crc kubenswrapper[4745]: E0121 11:09:35.001077 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:09:48 crc kubenswrapper[4745]: I0121 11:09:48.000918 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"
Jan 21 11:09:48 crc kubenswrapper[4745]: E0121 11:09:48.001605 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:09:49 crc kubenswrapper[4745]: I0121 11:09:49.093547 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-46df-account-create-update-ckz9b"]
Jan 21 11:09:49 crc kubenswrapper[4745]: I0121 11:09:49.120029 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-46df-account-create-update-ckz9b"]
Jan 21 11:09:50 crc kubenswrapper[4745]: I0121 11:09:50.013857 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4008d5c8-f775-45b9-bffc-fcbbd41768ba" path="/var/lib/kubelet/pods/4008d5c8-f775-45b9-bffc-fcbbd41768ba/volumes"
Jan 21 11:09:50 crc kubenswrapper[4745]: I0121 11:09:50.052597 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-6hd2t"]
Jan 21 11:09:50 crc kubenswrapper[4745]: I0121 11:09:50.066586 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-kqm6s"]
Jan 21 11:09:50 crc kubenswrapper[4745]: I0121 11:09:50.091596 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6hd2t"]
Jan 21 11:09:50 crc kubenswrapper[4745]: I0121 11:09:50.103323 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-d3be-account-create-update-b5s94"]
Jan 21 11:09:50 crc kubenswrapper[4745]: I0121 11:09:50.112295 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-ea70-account-create-update-9hjhh"]
Jan 21 11:09:50 crc kubenswrapper[4745]: I0121 11:09:50.121263 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-kqm6s"]
Jan 21 11:09:50 crc kubenswrapper[4745]: I0121 11:09:50.129031 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-d3be-account-create-update-b5s94"]
Jan 21 11:09:50 crc kubenswrapper[4745]: I0121 11:09:50.136260 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-ea70-account-create-update-9hjhh"]
Jan 21 11:09:51 crc kubenswrapper[4745]: I0121 11:09:51.043881 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-zdc2f"]
Jan 21 11:09:51 crc kubenswrapper[4745]: I0121 11:09:51.055087 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-zdc2f"]
Jan 21 11:09:52 crc kubenswrapper[4745]: I0121 11:09:52.013286 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c439a3b-429b-45f7-be39-a4fcbcf904b8" path="/var/lib/kubelet/pods/0c439a3b-429b-45f7-be39-a4fcbcf904b8/volumes"
Jan 21 11:09:52 crc kubenswrapper[4745]: I0121 11:09:52.015361 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25000567-9488-4bd2-8b57-a2b4b1f41366" path="/var/lib/kubelet/pods/25000567-9488-4bd2-8b57-a2b4b1f41366/volumes"
Jan 21 11:09:52 crc kubenswrapper[4745]: I0121 11:09:52.016205 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eaf1233-ea59-4baf-ab46-f24a0b142b80" path="/var/lib/kubelet/pods/7eaf1233-ea59-4baf-ab46-f24a0b142b80/volumes"
Jan 21 11:09:52 crc kubenswrapper[4745]: I0121 11:09:52.017069 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="867be566-a37c-499e-9d6b-026bbc370fe5" path="/var/lib/kubelet/pods/867be566-a37c-499e-9d6b-026bbc370fe5/volumes"
Jan 21 11:09:52 crc kubenswrapper[4745]: I0121 11:09:52.018437 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c096e9f7-6065-4656-82c3-167bd595c303" path="/var/lib/kubelet/pods/c096e9f7-6065-4656-82c3-167bd595c303/volumes"
Jan 21 11:09:54 crc kubenswrapper[4745]: I0121 11:09:54.315979 4745 scope.go:117] "RemoveContainer" containerID="e82ed3c3961cdea0b37f759d00cb79a10c4370c25f360fec1991f8ed3ff84fa6"
Jan 21 11:09:54 crc kubenswrapper[4745]: I0121 11:09:54.345792 4745 scope.go:117] "RemoveContainer" containerID="46e3d1395eabb7cca6c8a7c2b76bc9fdbc5806b059d8b8e959b93482eee75116"
Jan 21 11:09:54 crc kubenswrapper[4745]: I0121 11:09:54.387952 4745 scope.go:117] "RemoveContainer" containerID="e460acd1f01f3d93f03a584dadc6ecf18d2e36b6e0ad643f20d58b0b836cdab0"
Jan 21 11:09:54 crc kubenswrapper[4745]: I0121 11:09:54.441449 4745 scope.go:117] "RemoveContainer" containerID="7dfa3637b16cbe2749f299aa03af13f78f46d746c3d19c167f62d3973b8553ec"
Jan 21 11:09:54 crc kubenswrapper[4745]: I0121 11:09:54.486325 4745 scope.go:117] "RemoveContainer" containerID="22a5791efd23720cc761079399543e5686cf800f65231d2edb1e3221d13f2a53"
Jan 21 11:09:54 crc kubenswrapper[4745]: I0121 11:09:54.526167 4745 scope.go:117] "RemoveContainer" containerID="43d82d5a3110e11893aa1467f6d3aa403e213bc5a83a286643fbf64cf8b0853d"
Jan 21 11:10:01 crc kubenswrapper[4745]: I0121 11:10:01.000491 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"
Jan 21 11:10:01 crc kubenswrapper[4745]: E0121 11:10:01.001773 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:10:14 crc kubenswrapper[4745]: I0121 11:10:14.001076 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"
Jan 21 11:10:14 crc kubenswrapper[4745]: E0121 11:10:14.001914 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:10:29 crc kubenswrapper[4745]: I0121 11:10:29.000769 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908"
Jan 21 11:10:29 crc kubenswrapper[4745]: I0121 11:10:29.041564 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nhjnr"]
Jan 21 11:10:29 crc kubenswrapper[4745]: I0121 11:10:29.051862 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-nhjnr"]
Jan 21 11:10:29 crc kubenswrapper[4745]: I0121 11:10:29.798785 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"e1d844781c026bf555dfea0465014abdaecf9057a245267ab02f1183d1d50d0a"}
Jan 21 11:10:30 crc kubenswrapper[4745]: I0121 11:10:30.015101 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d39082c-f9aa-4e16-a704-487ab278344c" path="/var/lib/kubelet/pods/6d39082c-f9aa-4e16-a704-487ab278344c/volumes"
Jan 21 11:10:37 crc kubenswrapper[4745]: I0121 11:10:37.946877 4745 generic.go:334] "Generic (PLEG): container finished" podID="1b0d7ba0-2c25-43bf-8762-013c96431756" containerID="1484a561693b6b253df8c6410c5a6cf354f5b955272c301d1be1cabdccc0896d" exitCode=0
Jan 21 11:10:37 crc kubenswrapper[4745]: I0121 11:10:37.946999 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n" event={"ID":"1b0d7ba0-2c25-43bf-8762-013c96431756","Type":"ContainerDied","Data":"1484a561693b6b253df8c6410c5a6cf354f5b955272c301d1be1cabdccc0896d"}
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.434639 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.560680 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-inventory\") pod \"1b0d7ba0-2c25-43bf-8762-013c96431756\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") "
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.561169 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-ssh-key-openstack-edpm-ipam\") pod \"1b0d7ba0-2c25-43bf-8762-013c96431756\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") "
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.561563 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7psd\" (UniqueName: \"kubernetes.io/projected/1b0d7ba0-2c25-43bf-8762-013c96431756-kube-api-access-q7psd\") pod \"1b0d7ba0-2c25-43bf-8762-013c96431756\" (UID: \"1b0d7ba0-2c25-43bf-8762-013c96431756\") "
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.581637 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b0d7ba0-2c25-43bf-8762-013c96431756-kube-api-access-q7psd" (OuterVolumeSpecName: "kube-api-access-q7psd") pod "1b0d7ba0-2c25-43bf-8762-013c96431756" (UID: "1b0d7ba0-2c25-43bf-8762-013c96431756"). InnerVolumeSpecName "kube-api-access-q7psd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.603729 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-inventory" (OuterVolumeSpecName: "inventory") pod "1b0d7ba0-2c25-43bf-8762-013c96431756" (UID: "1b0d7ba0-2c25-43bf-8762-013c96431756"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.652443 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1b0d7ba0-2c25-43bf-8762-013c96431756" (UID: "1b0d7ba0-2c25-43bf-8762-013c96431756"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.665323 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7psd\" (UniqueName: \"kubernetes.io/projected/1b0d7ba0-2c25-43bf-8762-013c96431756-kube-api-access-q7psd\") on node \"crc\" DevicePath \"\""
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.665363 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-inventory\") on node \"crc\" DevicePath \"\""
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.665373 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1b0d7ba0-2c25-43bf-8762-013c96431756-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.966423 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n" event={"ID":"1b0d7ba0-2c25-43bf-8762-013c96431756","Type":"ContainerDied","Data":"213faf2e085141863a85d4307f01b5ee2821d09f0d92bf93b75660e1383bcb97"}
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.966817 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="213faf2e085141863a85d4307f01b5ee2821d09f0d92bf93b75660e1383bcb97"
Jan 21 11:10:39 crc kubenswrapper[4745]: I0121 11:10:39.966548 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.080369 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n"]
Jan 21 11:10:40 crc kubenswrapper[4745]: E0121 11:10:40.081004 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b0d7ba0-2c25-43bf-8762-013c96431756" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.081031 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b0d7ba0-2c25-43bf-8762-013c96431756" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.081325 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b0d7ba0-2c25-43bf-8762-013c96431756" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.082330 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.085218 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.085523 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.086798 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.093105 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.098617 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n"]
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.177023 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.177176 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.177220 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vktcs\" (UniqueName: \"kubernetes.io/projected/7554614d-4696-4385-84d1-9dd2236effef-kube-api-access-vktcs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.279315 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.279442 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.279506 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vktcs\" (UniqueName: \"kubernetes.io/projected/7554614d-4696-4385-84d1-9dd2236effef-kube-api-access-vktcs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n"
Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.285008 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n" Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.285004 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n" Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.300283 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vktcs\" (UniqueName: \"kubernetes.io/projected/7554614d-4696-4385-84d1-9dd2236effef-kube-api-access-vktcs\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n" Jan 21 11:10:40 crc kubenswrapper[4745]: I0121 11:10:40.403915 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n" Jan 21 11:10:41 crc kubenswrapper[4745]: I0121 11:10:41.193044 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n"] Jan 21 11:10:41 crc kubenswrapper[4745]: I0121 11:10:41.212213 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:10:41 crc kubenswrapper[4745]: I0121 11:10:41.986275 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n" event={"ID":"7554614d-4696-4385-84d1-9dd2236effef","Type":"ContainerStarted","Data":"29f31fc36eddb6c3be59bdf72bd027892e78268cbccbeaec7278330f0cae01c9"} Jan 21 11:10:42 crc kubenswrapper[4745]: I0121 11:10:42.996365 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n" event={"ID":"7554614d-4696-4385-84d1-9dd2236effef","Type":"ContainerStarted","Data":"080a68e2d9d174d871c1c7c89a8acba8669b53b07d8b1cc91831115d662f9f5f"} Jan 21 11:10:43 crc kubenswrapper[4745]: I0121 11:10:43.025595 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n" podStartSLOduration=2.121281749 podStartE2EDuration="3.02556563s" podCreationTimestamp="2026-01-21 11:10:40 +0000 UTC" firstStartedPulling="2026-01-21 11:10:41.21189636 +0000 UTC m=+2025.672683958" lastFinishedPulling="2026-01-21 11:10:42.116180221 +0000 UTC m=+2026.576967839" observedRunningTime="2026-01-21 11:10:43.017241844 +0000 UTC m=+2027.478029432" watchObservedRunningTime="2026-01-21 11:10:43.02556563 +0000 UTC m=+2027.486353228" Jan 21 11:10:54 crc kubenswrapper[4745]: I0121 11:10:54.683945 4745 scope.go:117] "RemoveContainer" containerID="39d186d40a9c15581d1b984e888206a008f06219ad9df9c09e5e0dee19a2a4f1" Jan 21 
11:10:59 crc kubenswrapper[4745]: I0121 11:10:59.079220 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-dnktc"] Jan 21 11:10:59 crc kubenswrapper[4745]: I0121 11:10:59.089407 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-dnktc"] Jan 21 11:11:00 crc kubenswrapper[4745]: I0121 11:11:00.013890 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d06e6bd-564b-441c-8672-3c170053407d" path="/var/lib/kubelet/pods/1d06e6bd-564b-441c-8672-3c170053407d/volumes" Jan 21 11:11:01 crc kubenswrapper[4745]: I0121 11:11:01.030659 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nrtlg"] Jan 21 11:11:01 crc kubenswrapper[4745]: I0121 11:11:01.042243 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nrtlg"] Jan 21 11:11:02 crc kubenswrapper[4745]: I0121 11:11:02.013956 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c" path="/var/lib/kubelet/pods/d3e9b2ce-f1d2-4dce-aaa4-5fdf622f445c/volumes" Jan 21 11:11:46 crc kubenswrapper[4745]: I0121 11:11:46.045650 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-42nll"] Jan 21 11:11:46 crc kubenswrapper[4745]: I0121 11:11:46.055580 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-42nll"] Jan 21 11:11:48 crc kubenswrapper[4745]: I0121 11:11:48.011257 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e444136-6476-4a25-b073-4f5e276fe173" path="/var/lib/kubelet/pods/7e444136-6476-4a25-b073-4f5e276fe173/volumes" Jan 21 11:11:54 crc kubenswrapper[4745]: I0121 11:11:54.750844 4745 scope.go:117] "RemoveContainer" containerID="59fa18fd04441fe640b53db2e68d59f99997ebaf8671b75549fdec50606a545b" Jan 21 11:11:54 crc kubenswrapper[4745]: I0121 11:11:54.807212 4745 
scope.go:117] "RemoveContainer" containerID="f5ac36da53d52a33cac774a3f877b28b621528750d100a68c55e9ede4f809ba4" Jan 21 11:11:54 crc kubenswrapper[4745]: I0121 11:11:54.840945 4745 scope.go:117] "RemoveContainer" containerID="1e64c8b49c4937475f2f4a6885064c0c5678a1800a000cab12c53d4ba828cf95" Jan 21 11:11:58 crc kubenswrapper[4745]: I0121 11:11:58.682754 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ctsc9"] Jan 21 11:11:58 crc kubenswrapper[4745]: I0121 11:11:58.685561 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:11:58 crc kubenswrapper[4745]: I0121 11:11:58.697076 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ctsc9"] Jan 21 11:11:58 crc kubenswrapper[4745]: I0121 11:11:58.712335 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7n86\" (UniqueName: \"kubernetes.io/projected/2be8ab68-c717-4837-87e1-e3a72e95525c-kube-api-access-n7n86\") pod \"redhat-operators-ctsc9\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:11:58 crc kubenswrapper[4745]: I0121 11:11:58.712721 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-utilities\") pod \"redhat-operators-ctsc9\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:11:58 crc kubenswrapper[4745]: I0121 11:11:58.712949 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-catalog-content\") pod \"redhat-operators-ctsc9\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " 
pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:11:58 crc kubenswrapper[4745]: I0121 11:11:58.814236 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-utilities\") pod \"redhat-operators-ctsc9\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:11:58 crc kubenswrapper[4745]: I0121 11:11:58.814375 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-catalog-content\") pod \"redhat-operators-ctsc9\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:11:58 crc kubenswrapper[4745]: I0121 11:11:58.814441 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7n86\" (UniqueName: \"kubernetes.io/projected/2be8ab68-c717-4837-87e1-e3a72e95525c-kube-api-access-n7n86\") pod \"redhat-operators-ctsc9\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:11:58 crc kubenswrapper[4745]: I0121 11:11:58.815168 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-catalog-content\") pod \"redhat-operators-ctsc9\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:11:58 crc kubenswrapper[4745]: I0121 11:11:58.815497 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-utilities\") pod \"redhat-operators-ctsc9\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:11:58 crc 
kubenswrapper[4745]: I0121 11:11:58.839294 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7n86\" (UniqueName: \"kubernetes.io/projected/2be8ab68-c717-4837-87e1-e3a72e95525c-kube-api-access-n7n86\") pod \"redhat-operators-ctsc9\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:11:59 crc kubenswrapper[4745]: I0121 11:11:59.011278 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:11:59 crc kubenswrapper[4745]: I0121 11:11:59.547382 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ctsc9"] Jan 21 11:11:59 crc kubenswrapper[4745]: I0121 11:11:59.676314 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsc9" event={"ID":"2be8ab68-c717-4837-87e1-e3a72e95525c","Type":"ContainerStarted","Data":"a6cd8b9ec6f06f26c5e8c57493c10c3e53cb467cd14b05c0a1fbc66b06a70c42"} Jan 21 11:12:00 crc kubenswrapper[4745]: I0121 11:12:00.691737 4745 generic.go:334] "Generic (PLEG): container finished" podID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerID="4ea6a191ac130174dae28f7dbba3ce394d2b8a7a50502a3a995080f25dd6a6bd" exitCode=0 Jan 21 11:12:00 crc kubenswrapper[4745]: I0121 11:12:00.691804 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsc9" event={"ID":"2be8ab68-c717-4837-87e1-e3a72e95525c","Type":"ContainerDied","Data":"4ea6a191ac130174dae28f7dbba3ce394d2b8a7a50502a3a995080f25dd6a6bd"} Jan 21 11:12:02 crc kubenswrapper[4745]: I0121 11:12:02.711501 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsc9" event={"ID":"2be8ab68-c717-4837-87e1-e3a72e95525c","Type":"ContainerStarted","Data":"bc45c99679746445d7048f63f815cd547933bdf9606d1c1f025a0e37eecddace"} Jan 21 11:12:09 crc kubenswrapper[4745]: I0121 
11:12:09.773965 4745 generic.go:334] "Generic (PLEG): container finished" podID="7554614d-4696-4385-84d1-9dd2236effef" containerID="080a68e2d9d174d871c1c7c89a8acba8669b53b07d8b1cc91831115d662f9f5f" exitCode=0 Jan 21 11:12:09 crc kubenswrapper[4745]: I0121 11:12:09.774569 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n" event={"ID":"7554614d-4696-4385-84d1-9dd2236effef","Type":"ContainerDied","Data":"080a68e2d9d174d871c1c7c89a8acba8669b53b07d8b1cc91831115d662f9f5f"} Jan 21 11:12:10 crc kubenswrapper[4745]: I0121 11:12:10.786276 4745 generic.go:334] "Generic (PLEG): container finished" podID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerID="bc45c99679746445d7048f63f815cd547933bdf9606d1c1f025a0e37eecddace" exitCode=0 Jan 21 11:12:10 crc kubenswrapper[4745]: I0121 11:12:10.786374 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsc9" event={"ID":"2be8ab68-c717-4837-87e1-e3a72e95525c","Type":"ContainerDied","Data":"bc45c99679746445d7048f63f815cd547933bdf9606d1c1f025a0e37eecddace"} Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.389785 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n" Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.592372 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vktcs\" (UniqueName: \"kubernetes.io/projected/7554614d-4696-4385-84d1-9dd2236effef-kube-api-access-vktcs\") pod \"7554614d-4696-4385-84d1-9dd2236effef\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.592506 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-inventory\") pod \"7554614d-4696-4385-84d1-9dd2236effef\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.593632 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-ssh-key-openstack-edpm-ipam\") pod \"7554614d-4696-4385-84d1-9dd2236effef\" (UID: \"7554614d-4696-4385-84d1-9dd2236effef\") " Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.618023 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7554614d-4696-4385-84d1-9dd2236effef-kube-api-access-vktcs" (OuterVolumeSpecName: "kube-api-access-vktcs") pod "7554614d-4696-4385-84d1-9dd2236effef" (UID: "7554614d-4696-4385-84d1-9dd2236effef"). InnerVolumeSpecName "kube-api-access-vktcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.664784 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-inventory" (OuterVolumeSpecName: "inventory") pod "7554614d-4696-4385-84d1-9dd2236effef" (UID: "7554614d-4696-4385-84d1-9dd2236effef"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.689919 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7554614d-4696-4385-84d1-9dd2236effef" (UID: "7554614d-4696-4385-84d1-9dd2236effef"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.699750 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.699782 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vktcs\" (UniqueName: \"kubernetes.io/projected/7554614d-4696-4385-84d1-9dd2236effef-kube-api-access-vktcs\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.699792 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7554614d-4696-4385-84d1-9dd2236effef-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.836797 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n" event={"ID":"7554614d-4696-4385-84d1-9dd2236effef","Type":"ContainerDied","Data":"29f31fc36eddb6c3be59bdf72bd027892e78268cbccbeaec7278330f0cae01c9"} Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 11:12:11.837283 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f31fc36eddb6c3be59bdf72bd027892e78268cbccbeaec7278330f0cae01c9" Jan 21 11:12:11 crc kubenswrapper[4745]: I0121 
11:12:11.837355 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.034206 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw"] Jan 21 11:12:12 crc kubenswrapper[4745]: E0121 11:12:12.038931 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7554614d-4696-4385-84d1-9dd2236effef" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.038971 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7554614d-4696-4385-84d1-9dd2236effef" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.039291 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7554614d-4696-4385-84d1-9dd2236effef" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.040159 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.064377 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.064710 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.065018 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.065242 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.071046 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw"] Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.119257 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tblgz\" (UniqueName: \"kubernetes.io/projected/e125e636-26d3-49e3-9e06-2c1a3cd106c9-kube-api-access-tblgz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-652rw\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.119459 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-652rw\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 
11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.119518 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-652rw\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.221546 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tblgz\" (UniqueName: \"kubernetes.io/projected/e125e636-26d3-49e3-9e06-2c1a3cd106c9-kube-api-access-tblgz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-652rw\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.221915 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-652rw\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.222083 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-652rw\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.228006 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-652rw\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.229090 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-652rw\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.242653 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tblgz\" (UniqueName: \"kubernetes.io/projected/e125e636-26d3-49e3-9e06-2c1a3cd106c9-kube-api-access-tblgz\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-652rw\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.360591 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.847688 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsc9" event={"ID":"2be8ab68-c717-4837-87e1-e3a72e95525c","Type":"ContainerStarted","Data":"5c43c3ed9dbb2f1afa0d497bcda2b2ece1acbf500fe7a067bf7717c2456b1e46"} Jan 21 11:12:12 crc kubenswrapper[4745]: I0121 11:12:12.870649 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ctsc9" podStartSLOduration=4.068109241 podStartE2EDuration="14.870629333s" podCreationTimestamp="2026-01-21 11:11:58 +0000 UTC" firstStartedPulling="2026-01-21 11:12:00.69386463 +0000 UTC m=+2105.154652238" lastFinishedPulling="2026-01-21 11:12:11.496384732 +0000 UTC m=+2115.957172330" observedRunningTime="2026-01-21 11:12:12.867734364 +0000 UTC m=+2117.328521962" watchObservedRunningTime="2026-01-21 11:12:12.870629333 +0000 UTC m=+2117.331416931" Jan 21 11:12:13 crc kubenswrapper[4745]: I0121 11:12:13.009565 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw"] Jan 21 11:12:13 crc kubenswrapper[4745]: I0121 11:12:13.857713 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" event={"ID":"e125e636-26d3-49e3-9e06-2c1a3cd106c9","Type":"ContainerStarted","Data":"9dbb03877cb71685449042db7c228d56e5450de062f67ce7952882cc491c3077"} Jan 21 11:12:14 crc kubenswrapper[4745]: I0121 11:12:14.867457 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" event={"ID":"e125e636-26d3-49e3-9e06-2c1a3cd106c9","Type":"ContainerStarted","Data":"1739e914812a4826140d56050483e716f53c43442db2e60afacd8dd7aac77412"} Jan 21 11:12:15 crc kubenswrapper[4745]: I0121 11:12:15.917235 
4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" podStartSLOduration=3.6567321550000003 podStartE2EDuration="4.917217847s" podCreationTimestamp="2026-01-21 11:12:11 +0000 UTC" firstStartedPulling="2026-01-21 11:12:13.024460169 +0000 UTC m=+2117.485247767" lastFinishedPulling="2026-01-21 11:12:14.284945861 +0000 UTC m=+2118.745733459" observedRunningTime="2026-01-21 11:12:15.90740775 +0000 UTC m=+2120.368195358" watchObservedRunningTime="2026-01-21 11:12:15.917217847 +0000 UTC m=+2120.378005445" Jan 21 11:12:19 crc kubenswrapper[4745]: I0121 11:12:19.012184 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:12:19 crc kubenswrapper[4745]: I0121 11:12:19.012502 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:12:20 crc kubenswrapper[4745]: I0121 11:12:20.070488 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ctsc9" podUID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerName="registry-server" probeResult="failure" output=< Jan 21 11:12:20 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:12:20 crc kubenswrapper[4745]: > Jan 21 11:12:21 crc kubenswrapper[4745]: I0121 11:12:21.934703 4745 generic.go:334] "Generic (PLEG): container finished" podID="e125e636-26d3-49e3-9e06-2c1a3cd106c9" containerID="1739e914812a4826140d56050483e716f53c43442db2e60afacd8dd7aac77412" exitCode=0 Jan 21 11:12:21 crc kubenswrapper[4745]: I0121 11:12:21.934780 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" event={"ID":"e125e636-26d3-49e3-9e06-2c1a3cd106c9","Type":"ContainerDied","Data":"1739e914812a4826140d56050483e716f53c43442db2e60afacd8dd7aac77412"} Jan 21 
11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.487934 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.606318 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tblgz\" (UniqueName: \"kubernetes.io/projected/e125e636-26d3-49e3-9e06-2c1a3cd106c9-kube-api-access-tblgz\") pod \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.606407 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-inventory\") pod \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.606739 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-ssh-key-openstack-edpm-ipam\") pod \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\" (UID: \"e125e636-26d3-49e3-9e06-2c1a3cd106c9\") " Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.626881 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e125e636-26d3-49e3-9e06-2c1a3cd106c9-kube-api-access-tblgz" (OuterVolumeSpecName: "kube-api-access-tblgz") pod "e125e636-26d3-49e3-9e06-2c1a3cd106c9" (UID: "e125e636-26d3-49e3-9e06-2c1a3cd106c9"). InnerVolumeSpecName "kube-api-access-tblgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.643587 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-inventory" (OuterVolumeSpecName: "inventory") pod "e125e636-26d3-49e3-9e06-2c1a3cd106c9" (UID: "e125e636-26d3-49e3-9e06-2c1a3cd106c9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.648986 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e125e636-26d3-49e3-9e06-2c1a3cd106c9" (UID: "e125e636-26d3-49e3-9e06-2c1a3cd106c9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.709035 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.709304 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tblgz\" (UniqueName: \"kubernetes.io/projected/e125e636-26d3-49e3-9e06-2c1a3cd106c9-kube-api-access-tblgz\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.709393 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e125e636-26d3-49e3-9e06-2c1a3cd106c9-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.953893 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" 
event={"ID":"e125e636-26d3-49e3-9e06-2c1a3cd106c9","Type":"ContainerDied","Data":"9dbb03877cb71685449042db7c228d56e5450de062f67ce7952882cc491c3077"} Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.953941 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9dbb03877cb71685449042db7c228d56e5450de062f67ce7952882cc491c3077" Jan 21 11:12:23 crc kubenswrapper[4745]: I0121 11:12:23.953998 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-652rw" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.046073 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv"] Jan 21 11:12:24 crc kubenswrapper[4745]: E0121 11:12:24.046588 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e125e636-26d3-49e3-9e06-2c1a3cd106c9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.046610 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e125e636-26d3-49e3-9e06-2c1a3cd106c9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.046852 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e125e636-26d3-49e3-9e06-2c1a3cd106c9" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.047708 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.054044 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.054416 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.055613 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.055735 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.063586 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv"] Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.224579 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tq4jv\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.225002 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnfpn\" (UniqueName: \"kubernetes.io/projected/73beacee-28b3-46c4-8643-74e53002ef5e-kube-api-access-xnfpn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tq4jv\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 
11:12:24.225168 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tq4jv\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.326595 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tq4jv\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.326674 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnfpn\" (UniqueName: \"kubernetes.io/projected/73beacee-28b3-46c4-8643-74e53002ef5e-kube-api-access-xnfpn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tq4jv\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.327073 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tq4jv\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.331018 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tq4jv\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.331479 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tq4jv\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.348307 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnfpn\" (UniqueName: \"kubernetes.io/projected/73beacee-28b3-46c4-8643-74e53002ef5e-kube-api-access-xnfpn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-tq4jv\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:12:24 crc kubenswrapper[4745]: I0121 11:12:24.373282 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:12:25 crc kubenswrapper[4745]: I0121 11:12:25.372472 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv"] Jan 21 11:12:25 crc kubenswrapper[4745]: I0121 11:12:25.973821 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" event={"ID":"73beacee-28b3-46c4-8643-74e53002ef5e","Type":"ContainerStarted","Data":"b80620a0bfc29f1ed18bb928346a4a4732e118313e928bcace7d9fe8db65c413"} Jan 21 11:12:26 crc kubenswrapper[4745]: I0121 11:12:26.981910 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" event={"ID":"73beacee-28b3-46c4-8643-74e53002ef5e","Type":"ContainerStarted","Data":"4ff0af1675b7fd02ab8b3fe7650ab129a735f57b096c7f803d8cd0cd7a33f3e4"} Jan 21 11:12:27 crc kubenswrapper[4745]: I0121 11:12:27.011902 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" podStartSLOduration=2.32269363 podStartE2EDuration="3.011884111s" podCreationTimestamp="2026-01-21 11:12:24 +0000 UTC" firstStartedPulling="2026-01-21 11:12:25.375414162 +0000 UTC m=+2129.836201760" lastFinishedPulling="2026-01-21 11:12:26.064604653 +0000 UTC m=+2130.525392241" observedRunningTime="2026-01-21 11:12:26.999871985 +0000 UTC m=+2131.460659583" watchObservedRunningTime="2026-01-21 11:12:27.011884111 +0000 UTC m=+2131.472671709" Jan 21 11:12:30 crc kubenswrapper[4745]: I0121 11:12:30.074123 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ctsc9" podUID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerName="registry-server" probeResult="failure" output=< Jan 21 11:12:30 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:12:30 
crc kubenswrapper[4745]: > Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.013302 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pxwxp"] Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.026588 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxwxp"] Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.026749 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.162470 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-utilities\") pod \"redhat-marketplace-pxwxp\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.162677 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-catalog-content\") pod \"redhat-marketplace-pxwxp\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.162912 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqp5p\" (UniqueName: \"kubernetes.io/projected/7d1683a6-5c39-423a-a947-49cc09c30a79-kube-api-access-kqp5p\") pod \"redhat-marketplace-pxwxp\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.264863 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-catalog-content\") pod \"redhat-marketplace-pxwxp\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.265437 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-catalog-content\") pod \"redhat-marketplace-pxwxp\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.265488 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-utilities\") pod \"redhat-marketplace-pxwxp\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.265900 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-utilities\") pod \"redhat-marketplace-pxwxp\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.266271 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqp5p\" (UniqueName: \"kubernetes.io/projected/7d1683a6-5c39-423a-a947-49cc09c30a79-kube-api-access-kqp5p\") pod \"redhat-marketplace-pxwxp\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.296051 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqp5p\" (UniqueName: 
\"kubernetes.io/projected/7d1683a6-5c39-423a-a947-49cc09c30a79-kube-api-access-kqp5p\") pod \"redhat-marketplace-pxwxp\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.369176 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:38 crc kubenswrapper[4745]: I0121 11:12:38.933772 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxwxp"] Jan 21 11:12:39 crc kubenswrapper[4745]: I0121 11:12:39.111332 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwxp" event={"ID":"7d1683a6-5c39-423a-a947-49cc09c30a79","Type":"ContainerStarted","Data":"cd8e15a34a5a47ea4a9869d31a3717493bc147bf153058e5a248463ccef0e912"} Jan 21 11:12:39 crc kubenswrapper[4745]: I0121 11:12:39.136242 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:12:39 crc kubenswrapper[4745]: I0121 11:12:39.216141 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:12:40 crc kubenswrapper[4745]: I0121 11:12:40.123754 4745 generic.go:334] "Generic (PLEG): container finished" podID="7d1683a6-5c39-423a-a947-49cc09c30a79" containerID="54ae6d9478cc864dc2c7480f573a20e1d18f4264b28b16cf69e287c868339c78" exitCode=0 Jan 21 11:12:40 crc kubenswrapper[4745]: I0121 11:12:40.124545 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwxp" event={"ID":"7d1683a6-5c39-423a-a947-49cc09c30a79","Type":"ContainerDied","Data":"54ae6d9478cc864dc2c7480f573a20e1d18f4264b28b16cf69e287c868339c78"} Jan 21 11:12:41 crc kubenswrapper[4745]: I0121 11:12:41.568808 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-ctsc9"] Jan 21 11:12:41 crc kubenswrapper[4745]: I0121 11:12:41.570129 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ctsc9" podUID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerName="registry-server" containerID="cri-o://5c43c3ed9dbb2f1afa0d497bcda2b2ece1acbf500fe7a067bf7717c2456b1e46" gracePeriod=2 Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.150598 4745 generic.go:334] "Generic (PLEG): container finished" podID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerID="5c43c3ed9dbb2f1afa0d497bcda2b2ece1acbf500fe7a067bf7717c2456b1e46" exitCode=0 Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.150665 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsc9" event={"ID":"2be8ab68-c717-4837-87e1-e3a72e95525c","Type":"ContainerDied","Data":"5c43c3ed9dbb2f1afa0d497bcda2b2ece1acbf500fe7a067bf7717c2456b1e46"} Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.153273 4745 generic.go:334] "Generic (PLEG): container finished" podID="7d1683a6-5c39-423a-a947-49cc09c30a79" containerID="90c82c2a767e0f62b435aa20f4b5022a7b40f882223d87aa85b44570786aaa63" exitCode=0 Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.153373 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwxp" event={"ID":"7d1683a6-5c39-423a-a947-49cc09c30a79","Type":"ContainerDied","Data":"90c82c2a767e0f62b435aa20f4b5022a7b40f882223d87aa85b44570786aaa63"} Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.627283 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.765754 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-utilities\") pod \"2be8ab68-c717-4837-87e1-e3a72e95525c\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.765845 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-catalog-content\") pod \"2be8ab68-c717-4837-87e1-e3a72e95525c\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.766039 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7n86\" (UniqueName: \"kubernetes.io/projected/2be8ab68-c717-4837-87e1-e3a72e95525c-kube-api-access-n7n86\") pod \"2be8ab68-c717-4837-87e1-e3a72e95525c\" (UID: \"2be8ab68-c717-4837-87e1-e3a72e95525c\") " Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.766899 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-utilities" (OuterVolumeSpecName: "utilities") pod "2be8ab68-c717-4837-87e1-e3a72e95525c" (UID: "2be8ab68-c717-4837-87e1-e3a72e95525c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.798868 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2be8ab68-c717-4837-87e1-e3a72e95525c-kube-api-access-n7n86" (OuterVolumeSpecName: "kube-api-access-n7n86") pod "2be8ab68-c717-4837-87e1-e3a72e95525c" (UID: "2be8ab68-c717-4837-87e1-e3a72e95525c"). InnerVolumeSpecName "kube-api-access-n7n86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.869053 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.869105 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7n86\" (UniqueName: \"kubernetes.io/projected/2be8ab68-c717-4837-87e1-e3a72e95525c-kube-api-access-n7n86\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.910636 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2be8ab68-c717-4837-87e1-e3a72e95525c" (UID: "2be8ab68-c717-4837-87e1-e3a72e95525c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:42 crc kubenswrapper[4745]: I0121 11:12:42.971301 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2be8ab68-c717-4837-87e1-e3a72e95525c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:43 crc kubenswrapper[4745]: I0121 11:12:43.165501 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ctsc9" Jan 21 11:12:43 crc kubenswrapper[4745]: I0121 11:12:43.165484 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ctsc9" event={"ID":"2be8ab68-c717-4837-87e1-e3a72e95525c","Type":"ContainerDied","Data":"a6cd8b9ec6f06f26c5e8c57493c10c3e53cb467cd14b05c0a1fbc66b06a70c42"} Jan 21 11:12:43 crc kubenswrapper[4745]: I0121 11:12:43.166108 4745 scope.go:117] "RemoveContainer" containerID="5c43c3ed9dbb2f1afa0d497bcda2b2ece1acbf500fe7a067bf7717c2456b1e46" Jan 21 11:12:43 crc kubenswrapper[4745]: I0121 11:12:43.168270 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwxp" event={"ID":"7d1683a6-5c39-423a-a947-49cc09c30a79","Type":"ContainerStarted","Data":"42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445"} Jan 21 11:12:43 crc kubenswrapper[4745]: I0121 11:12:43.216212 4745 scope.go:117] "RemoveContainer" containerID="bc45c99679746445d7048f63f815cd547933bdf9606d1c1f025a0e37eecddace" Jan 21 11:12:43 crc kubenswrapper[4745]: I0121 11:12:43.229146 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pxwxp" podStartSLOduration=3.756829478 podStartE2EDuration="6.229122149s" podCreationTimestamp="2026-01-21 11:12:37 +0000 UTC" firstStartedPulling="2026-01-21 11:12:40.126975307 +0000 UTC m=+2144.587762905" lastFinishedPulling="2026-01-21 11:12:42.599267978 +0000 UTC m=+2147.060055576" observedRunningTime="2026-01-21 11:12:43.209205018 +0000 UTC m=+2147.669992626" watchObservedRunningTime="2026-01-21 11:12:43.229122149 +0000 UTC m=+2147.689909747" Jan 21 11:12:43 crc kubenswrapper[4745]: I0121 11:12:43.273333 4745 scope.go:117] "RemoveContainer" containerID="4ea6a191ac130174dae28f7dbba3ce394d2b8a7a50502a3a995080f25dd6a6bd" Jan 21 11:12:43 crc kubenswrapper[4745]: I0121 11:12:43.281712 4745 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-ctsc9"] Jan 21 11:12:43 crc kubenswrapper[4745]: I0121 11:12:43.293302 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ctsc9"] Jan 21 11:12:44 crc kubenswrapper[4745]: I0121 11:12:44.015880 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2be8ab68-c717-4837-87e1-e3a72e95525c" path="/var/lib/kubelet/pods/2be8ab68-c717-4837-87e1-e3a72e95525c/volumes" Jan 21 11:12:45 crc kubenswrapper[4745]: I0121 11:12:45.867030 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:12:45 crc kubenswrapper[4745]: I0121 11:12:45.867474 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:12:48 crc kubenswrapper[4745]: I0121 11:12:48.370648 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:48 crc kubenswrapper[4745]: I0121 11:12:48.371049 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:48 crc kubenswrapper[4745]: I0121 11:12:48.430012 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:49 crc kubenswrapper[4745]: I0121 11:12:49.287626 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 
11:12:49 crc kubenswrapper[4745]: I0121 11:12:49.372447 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxwxp"] Jan 21 11:12:51 crc kubenswrapper[4745]: I0121 11:12:51.248175 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pxwxp" podUID="7d1683a6-5c39-423a-a947-49cc09c30a79" containerName="registry-server" containerID="cri-o://42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445" gracePeriod=2 Jan 21 11:12:51 crc kubenswrapper[4745]: I0121 11:12:51.819511 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:51 crc kubenswrapper[4745]: I0121 11:12:51.934330 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-utilities\") pod \"7d1683a6-5c39-423a-a947-49cc09c30a79\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " Jan 21 11:12:51 crc kubenswrapper[4745]: I0121 11:12:51.934765 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-catalog-content\") pod \"7d1683a6-5c39-423a-a947-49cc09c30a79\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " Jan 21 11:12:51 crc kubenswrapper[4745]: I0121 11:12:51.934801 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqp5p\" (UniqueName: \"kubernetes.io/projected/7d1683a6-5c39-423a-a947-49cc09c30a79-kube-api-access-kqp5p\") pod \"7d1683a6-5c39-423a-a947-49cc09c30a79\" (UID: \"7d1683a6-5c39-423a-a947-49cc09c30a79\") " Jan 21 11:12:51 crc kubenswrapper[4745]: I0121 11:12:51.935706 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-utilities" (OuterVolumeSpecName: "utilities") pod "7d1683a6-5c39-423a-a947-49cc09c30a79" (UID: "7d1683a6-5c39-423a-a947-49cc09c30a79"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:51 crc kubenswrapper[4745]: I0121 11:12:51.951362 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d1683a6-5c39-423a-a947-49cc09c30a79-kube-api-access-kqp5p" (OuterVolumeSpecName: "kube-api-access-kqp5p") pod "7d1683a6-5c39-423a-a947-49cc09c30a79" (UID: "7d1683a6-5c39-423a-a947-49cc09c30a79"). InnerVolumeSpecName "kube-api-access-kqp5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:12:51 crc kubenswrapper[4745]: I0121 11:12:51.967677 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d1683a6-5c39-423a-a947-49cc09c30a79" (UID: "7d1683a6-5c39-423a-a947-49cc09c30a79"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.038193 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.038250 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqp5p\" (UniqueName: \"kubernetes.io/projected/7d1683a6-5c39-423a-a947-49cc09c30a79-kube-api-access-kqp5p\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.038267 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d1683a6-5c39-423a-a947-49cc09c30a79-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.260761 4745 generic.go:334] "Generic (PLEG): container finished" podID="7d1683a6-5c39-423a-a947-49cc09c30a79" containerID="42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445" exitCode=0 Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.260838 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwxp" event={"ID":"7d1683a6-5c39-423a-a947-49cc09c30a79","Type":"ContainerDied","Data":"42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445"} Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.260885 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pxwxp" event={"ID":"7d1683a6-5c39-423a-a947-49cc09c30a79","Type":"ContainerDied","Data":"cd8e15a34a5a47ea4a9869d31a3717493bc147bf153058e5a248463ccef0e912"} Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.260918 4745 scope.go:117] "RemoveContainer" containerID="42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 
11:12:52.260947 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pxwxp" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.298885 4745 scope.go:117] "RemoveContainer" containerID="90c82c2a767e0f62b435aa20f4b5022a7b40f882223d87aa85b44570786aaa63" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.302442 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxwxp"] Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.320749 4745 scope.go:117] "RemoveContainer" containerID="54ae6d9478cc864dc2c7480f573a20e1d18f4264b28b16cf69e287c868339c78" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.324551 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pxwxp"] Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.378913 4745 scope.go:117] "RemoveContainer" containerID="42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445" Jan 21 11:12:52 crc kubenswrapper[4745]: E0121 11:12:52.379847 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445\": container with ID starting with 42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445 not found: ID does not exist" containerID="42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.379894 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445"} err="failed to get container status \"42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445\": rpc error: code = NotFound desc = could not find container \"42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445\": container with ID starting with 
42179ae0ffbdc4c2685c0bb660672e77559c77381eb71ef91912d1479b46b445 not found: ID does not exist" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.379924 4745 scope.go:117] "RemoveContainer" containerID="90c82c2a767e0f62b435aa20f4b5022a7b40f882223d87aa85b44570786aaa63" Jan 21 11:12:52 crc kubenswrapper[4745]: E0121 11:12:52.380557 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90c82c2a767e0f62b435aa20f4b5022a7b40f882223d87aa85b44570786aaa63\": container with ID starting with 90c82c2a767e0f62b435aa20f4b5022a7b40f882223d87aa85b44570786aaa63 not found: ID does not exist" containerID="90c82c2a767e0f62b435aa20f4b5022a7b40f882223d87aa85b44570786aaa63" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.380590 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90c82c2a767e0f62b435aa20f4b5022a7b40f882223d87aa85b44570786aaa63"} err="failed to get container status \"90c82c2a767e0f62b435aa20f4b5022a7b40f882223d87aa85b44570786aaa63\": rpc error: code = NotFound desc = could not find container \"90c82c2a767e0f62b435aa20f4b5022a7b40f882223d87aa85b44570786aaa63\": container with ID starting with 90c82c2a767e0f62b435aa20f4b5022a7b40f882223d87aa85b44570786aaa63 not found: ID does not exist" Jan 21 11:12:52 crc kubenswrapper[4745]: I0121 11:12:52.380608 4745 scope.go:117] "RemoveContainer" containerID="54ae6d9478cc864dc2c7480f573a20e1d18f4264b28b16cf69e287c868339c78" Jan 21 11:12:52 crc kubenswrapper[4745]: E0121 11:12:52.381064 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54ae6d9478cc864dc2c7480f573a20e1d18f4264b28b16cf69e287c868339c78\": container with ID starting with 54ae6d9478cc864dc2c7480f573a20e1d18f4264b28b16cf69e287c868339c78 not found: ID does not exist" containerID="54ae6d9478cc864dc2c7480f573a20e1d18f4264b28b16cf69e287c868339c78" Jan 21 11:12:52 crc 
kubenswrapper[4745]: I0121 11:12:52.381092 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54ae6d9478cc864dc2c7480f573a20e1d18f4264b28b16cf69e287c868339c78"} err="failed to get container status \"54ae6d9478cc864dc2c7480f573a20e1d18f4264b28b16cf69e287c868339c78\": rpc error: code = NotFound desc = could not find container \"54ae6d9478cc864dc2c7480f573a20e1d18f4264b28b16cf69e287c868339c78\": container with ID starting with 54ae6d9478cc864dc2c7480f573a20e1d18f4264b28b16cf69e287c868339c78 not found: ID does not exist" Jan 21 11:12:54 crc kubenswrapper[4745]: I0121 11:12:54.015698 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1683a6-5c39-423a-a947-49cc09c30a79" path="/var/lib/kubelet/pods/7d1683a6-5c39-423a-a947-49cc09c30a79/volumes" Jan 21 11:13:10 crc kubenswrapper[4745]: I0121 11:13:10.447242 4745 generic.go:334] "Generic (PLEG): container finished" podID="73beacee-28b3-46c4-8643-74e53002ef5e" containerID="4ff0af1675b7fd02ab8b3fe7650ab129a735f57b096c7f803d8cd0cd7a33f3e4" exitCode=0 Jan 21 11:13:10 crc kubenswrapper[4745]: I0121 11:13:10.447364 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" event={"ID":"73beacee-28b3-46c4-8643-74e53002ef5e","Type":"ContainerDied","Data":"4ff0af1675b7fd02ab8b3fe7650ab129a735f57b096c7f803d8cd0cd7a33f3e4"} Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.128127 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.318493 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-inventory\") pod \"73beacee-28b3-46c4-8643-74e53002ef5e\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.318557 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-ssh-key-openstack-edpm-ipam\") pod \"73beacee-28b3-46c4-8643-74e53002ef5e\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.318595 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnfpn\" (UniqueName: \"kubernetes.io/projected/73beacee-28b3-46c4-8643-74e53002ef5e-kube-api-access-xnfpn\") pod \"73beacee-28b3-46c4-8643-74e53002ef5e\" (UID: \"73beacee-28b3-46c4-8643-74e53002ef5e\") " Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.327780 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73beacee-28b3-46c4-8643-74e53002ef5e-kube-api-access-xnfpn" (OuterVolumeSpecName: "kube-api-access-xnfpn") pod "73beacee-28b3-46c4-8643-74e53002ef5e" (UID: "73beacee-28b3-46c4-8643-74e53002ef5e"). InnerVolumeSpecName "kube-api-access-xnfpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.353736 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "73beacee-28b3-46c4-8643-74e53002ef5e" (UID: "73beacee-28b3-46c4-8643-74e53002ef5e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.354189 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-inventory" (OuterVolumeSpecName: "inventory") pod "73beacee-28b3-46c4-8643-74e53002ef5e" (UID: "73beacee-28b3-46c4-8643-74e53002ef5e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.421085 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.421134 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/73beacee-28b3-46c4-8643-74e53002ef5e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.421145 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnfpn\" (UniqueName: \"kubernetes.io/projected/73beacee-28b3-46c4-8643-74e53002ef5e-kube-api-access-xnfpn\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.476885 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" 
event={"ID":"73beacee-28b3-46c4-8643-74e53002ef5e","Type":"ContainerDied","Data":"b80620a0bfc29f1ed18bb928346a4a4732e118313e928bcace7d9fe8db65c413"} Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.476961 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b80620a0bfc29f1ed18bb928346a4a4732e118313e928bcace7d9fe8db65c413" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.477015 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-tq4jv" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.566874 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7"] Jan 21 11:13:12 crc kubenswrapper[4745]: E0121 11:13:12.567272 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1683a6-5c39-423a-a947-49cc09c30a79" containerName="registry-server" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.567291 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1683a6-5c39-423a-a947-49cc09c30a79" containerName="registry-server" Jan 21 11:13:12 crc kubenswrapper[4745]: E0121 11:13:12.567311 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerName="extract-utilities" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.567317 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerName="extract-utilities" Jan 21 11:13:12 crc kubenswrapper[4745]: E0121 11:13:12.567331 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1683a6-5c39-423a-a947-49cc09c30a79" containerName="extract-content" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.567337 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1683a6-5c39-423a-a947-49cc09c30a79" containerName="extract-content" Jan 21 11:13:12 crc 
kubenswrapper[4745]: E0121 11:13:12.567344 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerName="extract-content" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.567350 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerName="extract-content" Jan 21 11:13:12 crc kubenswrapper[4745]: E0121 11:13:12.567359 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1683a6-5c39-423a-a947-49cc09c30a79" containerName="extract-utilities" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.567365 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1683a6-5c39-423a-a947-49cc09c30a79" containerName="extract-utilities" Jan 21 11:13:12 crc kubenswrapper[4745]: E0121 11:13:12.567376 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73beacee-28b3-46c4-8643-74e53002ef5e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.567383 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="73beacee-28b3-46c4-8643-74e53002ef5e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:13:12 crc kubenswrapper[4745]: E0121 11:13:12.567404 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerName="registry-server" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.567409 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerName="registry-server" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.567676 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d1683a6-5c39-423a-a947-49cc09c30a79" containerName="registry-server" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.567695 4745 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2be8ab68-c717-4837-87e1-e3a72e95525c" containerName="registry-server" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.567710 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="73beacee-28b3-46c4-8643-74e53002ef5e" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.568359 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.576730 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7"] Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.577199 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.577269 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.577613 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.577802 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.728082 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwvv5\" (UniqueName: \"kubernetes.io/projected/07b8c861-e874-4967-871b-5c6ca50791fa-kube-api-access-lwvv5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blxw7\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.728576 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blxw7\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.728640 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blxw7\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.830865 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwvv5\" (UniqueName: \"kubernetes.io/projected/07b8c861-e874-4967-871b-5c6ca50791fa-kube-api-access-lwvv5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blxw7\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.831012 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blxw7\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.831082 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-inventory\") 
pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blxw7\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.837976 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blxw7\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.840935 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blxw7\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.863397 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwvv5\" (UniqueName: \"kubernetes.io/projected/07b8c861-e874-4967-871b-5c6ca50791fa-kube-api-access-lwvv5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-blxw7\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:13:12 crc kubenswrapper[4745]: I0121 11:13:12.893189 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:13:13 crc kubenswrapper[4745]: I0121 11:13:13.787054 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7"] Jan 21 11:13:13 crc kubenswrapper[4745]: W0121 11:13:13.798834 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07b8c861_e874_4967_871b_5c6ca50791fa.slice/crio-6f0c7b0d89c3615c6c49873320f8cd911794f5a46c228b299fcb13e183b68f3f WatchSource:0}: Error finding container 6f0c7b0d89c3615c6c49873320f8cd911794f5a46c228b299fcb13e183b68f3f: Status 404 returned error can't find the container with id 6f0c7b0d89c3615c6c49873320f8cd911794f5a46c228b299fcb13e183b68f3f Jan 21 11:13:14 crc kubenswrapper[4745]: I0121 11:13:14.491943 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" event={"ID":"07b8c861-e874-4967-871b-5c6ca50791fa","Type":"ContainerStarted","Data":"6f0c7b0d89c3615c6c49873320f8cd911794f5a46c228b299fcb13e183b68f3f"} Jan 21 11:13:15 crc kubenswrapper[4745]: I0121 11:13:15.504213 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" event={"ID":"07b8c861-e874-4967-871b-5c6ca50791fa","Type":"ContainerStarted","Data":"f50038b8b83e44d173cc14a9402368c13474bc5b2ccccd22bcb905c65c1abd6e"} Jan 21 11:13:15 crc kubenswrapper[4745]: I0121 11:13:15.866174 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:13:15 crc kubenswrapper[4745]: I0121 11:13:15.866458 4745 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:13:35 crc kubenswrapper[4745]: I0121 11:13:35.893710 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" podStartSLOduration=22.463889704 podStartE2EDuration="23.893676352s" podCreationTimestamp="2026-01-21 11:13:12 +0000 UTC" firstStartedPulling="2026-01-21 11:13:13.80072489 +0000 UTC m=+2178.261512488" lastFinishedPulling="2026-01-21 11:13:15.230511538 +0000 UTC m=+2179.691299136" observedRunningTime="2026-01-21 11:13:15.52451445 +0000 UTC m=+2179.985302058" watchObservedRunningTime="2026-01-21 11:13:35.893676352 +0000 UTC m=+2200.354463970" Jan 21 11:13:35 crc kubenswrapper[4745]: I0121 11:13:35.899992 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mm9c9"] Jan 21 11:13:35 crc kubenswrapper[4745]: I0121 11:13:35.903072 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:35 crc kubenswrapper[4745]: I0121 11:13:35.908802 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mm9c9"] Jan 21 11:13:35 crc kubenswrapper[4745]: I0121 11:13:35.909875 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-catalog-content\") pod \"certified-operators-mm9c9\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:35 crc kubenswrapper[4745]: I0121 11:13:35.909939 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbvnz\" (UniqueName: \"kubernetes.io/projected/34e84ae5-f336-4d47-9e85-e790c2854705-kube-api-access-lbvnz\") pod \"certified-operators-mm9c9\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:35 crc kubenswrapper[4745]: I0121 11:13:35.910068 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-utilities\") pod \"certified-operators-mm9c9\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:36 crc kubenswrapper[4745]: I0121 11:13:36.011173 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lbvnz\" (UniqueName: \"kubernetes.io/projected/34e84ae5-f336-4d47-9e85-e790c2854705-kube-api-access-lbvnz\") pod \"certified-operators-mm9c9\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:36 crc kubenswrapper[4745]: I0121 11:13:36.011389 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-utilities\") pod \"certified-operators-mm9c9\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:36 crc kubenswrapper[4745]: I0121 11:13:36.011511 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-catalog-content\") pod \"certified-operators-mm9c9\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:36 crc kubenswrapper[4745]: I0121 11:13:36.012120 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-catalog-content\") pod \"certified-operators-mm9c9\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:36 crc kubenswrapper[4745]: I0121 11:13:36.012377 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-utilities\") pod \"certified-operators-mm9c9\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:36 crc kubenswrapper[4745]: I0121 11:13:36.046887 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbvnz\" (UniqueName: \"kubernetes.io/projected/34e84ae5-f336-4d47-9e85-e790c2854705-kube-api-access-lbvnz\") pod \"certified-operators-mm9c9\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:36 crc kubenswrapper[4745]: I0121 11:13:36.229872 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:36 crc kubenswrapper[4745]: I0121 11:13:36.727408 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mm9c9"] Jan 21 11:13:37 crc kubenswrapper[4745]: I0121 11:13:37.695064 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9c9" event={"ID":"34e84ae5-f336-4d47-9e85-e790c2854705","Type":"ContainerDied","Data":"030071fff23c8ef30ceef7ac2ad7d0b5a5354202b67f293c9a0178e3e75a4f26"} Jan 21 11:13:37 crc kubenswrapper[4745]: I0121 11:13:37.695030 4745 generic.go:334] "Generic (PLEG): container finished" podID="34e84ae5-f336-4d47-9e85-e790c2854705" containerID="030071fff23c8ef30ceef7ac2ad7d0b5a5354202b67f293c9a0178e3e75a4f26" exitCode=0 Jan 21 11:13:37 crc kubenswrapper[4745]: I0121 11:13:37.695828 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9c9" event={"ID":"34e84ae5-f336-4d47-9e85-e790c2854705","Type":"ContainerStarted","Data":"044d813d5fa6f38727ac92abeffcc17770c53a5fe8f65358dcdef441c00c6491"} Jan 21 11:13:38 crc kubenswrapper[4745]: I0121 11:13:38.709846 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9c9" event={"ID":"34e84ae5-f336-4d47-9e85-e790c2854705","Type":"ContainerStarted","Data":"6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e"} Jan 21 11:13:39 crc kubenswrapper[4745]: I0121 11:13:39.725977 4745 generic.go:334] "Generic (PLEG): container finished" podID="34e84ae5-f336-4d47-9e85-e790c2854705" containerID="6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e" exitCode=0 Jan 21 11:13:39 crc kubenswrapper[4745]: I0121 11:13:39.726714 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9c9" 
event={"ID":"34e84ae5-f336-4d47-9e85-e790c2854705","Type":"ContainerDied","Data":"6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e"} Jan 21 11:13:40 crc kubenswrapper[4745]: I0121 11:13:40.740782 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9c9" event={"ID":"34e84ae5-f336-4d47-9e85-e790c2854705","Type":"ContainerStarted","Data":"3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc"} Jan 21 11:13:40 crc kubenswrapper[4745]: I0121 11:13:40.767505 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mm9c9" podStartSLOduration=3.395932139 podStartE2EDuration="5.767473764s" podCreationTimestamp="2026-01-21 11:13:35 +0000 UTC" firstStartedPulling="2026-01-21 11:13:37.743758382 +0000 UTC m=+2202.204545980" lastFinishedPulling="2026-01-21 11:13:40.115300007 +0000 UTC m=+2204.576087605" observedRunningTime="2026-01-21 11:13:40.761485741 +0000 UTC m=+2205.222273339" watchObservedRunningTime="2026-01-21 11:13:40.767473764 +0000 UTC m=+2205.228261362" Jan 21 11:13:45 crc kubenswrapper[4745]: I0121 11:13:45.866405 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:13:45 crc kubenswrapper[4745]: I0121 11:13:45.867376 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:13:45 crc kubenswrapper[4745]: I0121 11:13:45.867441 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 11:13:45 crc kubenswrapper[4745]: I0121 11:13:45.868343 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e1d844781c026bf555dfea0465014abdaecf9057a245267ab02f1183d1d50d0a"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:13:45 crc kubenswrapper[4745]: I0121 11:13:45.868402 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://e1d844781c026bf555dfea0465014abdaecf9057a245267ab02f1183d1d50d0a" gracePeriod=600 Jan 21 11:13:46 crc kubenswrapper[4745]: I0121 11:13:46.231517 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:46 crc kubenswrapper[4745]: I0121 11:13:46.232130 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:46 crc kubenswrapper[4745]: I0121 11:13:46.286416 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:46 crc kubenswrapper[4745]: I0121 11:13:46.801043 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="e1d844781c026bf555dfea0465014abdaecf9057a245267ab02f1183d1d50d0a" exitCode=0 Jan 21 11:13:46 crc kubenswrapper[4745]: I0121 11:13:46.801123 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" 
event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"e1d844781c026bf555dfea0465014abdaecf9057a245267ab02f1183d1d50d0a"} Jan 21 11:13:46 crc kubenswrapper[4745]: I0121 11:13:46.801505 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628"} Jan 21 11:13:46 crc kubenswrapper[4745]: I0121 11:13:46.801605 4745 scope.go:117] "RemoveContainer" containerID="54c8304c1538fbdc9c36ea126e5a911983c4ae6651509d49a8e902a8a824f908" Jan 21 11:13:46 crc kubenswrapper[4745]: I0121 11:13:46.864461 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:46 crc kubenswrapper[4745]: I0121 11:13:46.925340 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mm9c9"] Jan 21 11:13:48 crc kubenswrapper[4745]: I0121 11:13:48.823867 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mm9c9" podUID="34e84ae5-f336-4d47-9e85-e790c2854705" containerName="registry-server" containerID="cri-o://3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc" gracePeriod=2 Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.420893 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.547955 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-catalog-content\") pod \"34e84ae5-f336-4d47-9e85-e790c2854705\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.548056 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-utilities\") pod \"34e84ae5-f336-4d47-9e85-e790c2854705\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.548151 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbvnz\" (UniqueName: \"kubernetes.io/projected/34e84ae5-f336-4d47-9e85-e790c2854705-kube-api-access-lbvnz\") pod \"34e84ae5-f336-4d47-9e85-e790c2854705\" (UID: \"34e84ae5-f336-4d47-9e85-e790c2854705\") " Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.549249 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-utilities" (OuterVolumeSpecName: "utilities") pod "34e84ae5-f336-4d47-9e85-e790c2854705" (UID: "34e84ae5-f336-4d47-9e85-e790c2854705"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.555950 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e84ae5-f336-4d47-9e85-e790c2854705-kube-api-access-lbvnz" (OuterVolumeSpecName: "kube-api-access-lbvnz") pod "34e84ae5-f336-4d47-9e85-e790c2854705" (UID: "34e84ae5-f336-4d47-9e85-e790c2854705"). InnerVolumeSpecName "kube-api-access-lbvnz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.600633 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34e84ae5-f336-4d47-9e85-e790c2854705" (UID: "34e84ae5-f336-4d47-9e85-e790c2854705"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.651039 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.651077 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lbvnz\" (UniqueName: \"kubernetes.io/projected/34e84ae5-f336-4d47-9e85-e790c2854705-kube-api-access-lbvnz\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.651113 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34e84ae5-f336-4d47-9e85-e790c2854705-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.837770 4745 generic.go:334] "Generic (PLEG): container finished" podID="34e84ae5-f336-4d47-9e85-e790c2854705" containerID="3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc" exitCode=0 Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.837839 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mm9c9" event={"ID":"34e84ae5-f336-4d47-9e85-e790c2854705","Type":"ContainerDied","Data":"3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc"} Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.838116 4745 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-mm9c9" event={"ID":"34e84ae5-f336-4d47-9e85-e790c2854705","Type":"ContainerDied","Data":"044d813d5fa6f38727ac92abeffcc17770c53a5fe8f65358dcdef441c00c6491"} Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.838144 4745 scope.go:117] "RemoveContainer" containerID="3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.837922 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mm9c9" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.865962 4745 scope.go:117] "RemoveContainer" containerID="6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.873794 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mm9c9"] Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.884590 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mm9c9"] Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.897324 4745 scope.go:117] "RemoveContainer" containerID="030071fff23c8ef30ceef7ac2ad7d0b5a5354202b67f293c9a0178e3e75a4f26" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.943981 4745 scope.go:117] "RemoveContainer" containerID="3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc" Jan 21 11:13:49 crc kubenswrapper[4745]: E0121 11:13:49.944611 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc\": container with ID starting with 3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc not found: ID does not exist" containerID="3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 
11:13:49.944665 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc"} err="failed to get container status \"3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc\": rpc error: code = NotFound desc = could not find container \"3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc\": container with ID starting with 3da01d51e3b161cd6a26cff00739c7e05d7f046307dbc3b95d503fc7fa0897dc not found: ID does not exist" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.944712 4745 scope.go:117] "RemoveContainer" containerID="6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e" Jan 21 11:13:49 crc kubenswrapper[4745]: E0121 11:13:49.945327 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e\": container with ID starting with 6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e not found: ID does not exist" containerID="6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.945356 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e"} err="failed to get container status \"6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e\": rpc error: code = NotFound desc = could not find container \"6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e\": container with ID starting with 6f4d1713074ff0c2f32ddc4439680f360c8cab0b6bfa1b0b6f8a2e24a688936e not found: ID does not exist" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.945376 4745 scope.go:117] "RemoveContainer" containerID="030071fff23c8ef30ceef7ac2ad7d0b5a5354202b67f293c9a0178e3e75a4f26" Jan 21 11:13:49 crc 
kubenswrapper[4745]: E0121 11:13:49.945926 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"030071fff23c8ef30ceef7ac2ad7d0b5a5354202b67f293c9a0178e3e75a4f26\": container with ID starting with 030071fff23c8ef30ceef7ac2ad7d0b5a5354202b67f293c9a0178e3e75a4f26 not found: ID does not exist" containerID="030071fff23c8ef30ceef7ac2ad7d0b5a5354202b67f293c9a0178e3e75a4f26" Jan 21 11:13:49 crc kubenswrapper[4745]: I0121 11:13:49.946055 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"030071fff23c8ef30ceef7ac2ad7d0b5a5354202b67f293c9a0178e3e75a4f26"} err="failed to get container status \"030071fff23c8ef30ceef7ac2ad7d0b5a5354202b67f293c9a0178e3e75a4f26\": rpc error: code = NotFound desc = could not find container \"030071fff23c8ef30ceef7ac2ad7d0b5a5354202b67f293c9a0178e3e75a4f26\": container with ID starting with 030071fff23c8ef30ceef7ac2ad7d0b5a5354202b67f293c9a0178e3e75a4f26 not found: ID does not exist" Jan 21 11:13:50 crc kubenswrapper[4745]: I0121 11:13:50.016576 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e84ae5-f336-4d47-9e85-e790c2854705" path="/var/lib/kubelet/pods/34e84ae5-f336-4d47-9e85-e790c2854705/volumes" Jan 21 11:14:16 crc kubenswrapper[4745]: I0121 11:14:16.146755 4745 generic.go:334] "Generic (PLEG): container finished" podID="07b8c861-e874-4967-871b-5c6ca50791fa" containerID="f50038b8b83e44d173cc14a9402368c13474bc5b2ccccd22bcb905c65c1abd6e" exitCode=0 Jan 21 11:14:16 crc kubenswrapper[4745]: I0121 11:14:16.146830 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" event={"ID":"07b8c861-e874-4967-871b-5c6ca50791fa","Type":"ContainerDied","Data":"f50038b8b83e44d173cc14a9402368c13474bc5b2ccccd22bcb905c65c1abd6e"} Jan 21 11:14:17 crc kubenswrapper[4745]: I0121 11:14:17.833360 4745 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:14:17 crc kubenswrapper[4745]: I0121 11:14:17.924392 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwvv5\" (UniqueName: \"kubernetes.io/projected/07b8c861-e874-4967-871b-5c6ca50791fa-kube-api-access-lwvv5\") pod \"07b8c861-e874-4967-871b-5c6ca50791fa\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " Jan 21 11:14:17 crc kubenswrapper[4745]: I0121 11:14:17.924883 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-ssh-key-openstack-edpm-ipam\") pod \"07b8c861-e874-4967-871b-5c6ca50791fa\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " Jan 21 11:14:17 crc kubenswrapper[4745]: I0121 11:14:17.924961 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-inventory\") pod \"07b8c861-e874-4967-871b-5c6ca50791fa\" (UID: \"07b8c861-e874-4967-871b-5c6ca50791fa\") " Jan 21 11:14:17 crc kubenswrapper[4745]: I0121 11:14:17.936981 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07b8c861-e874-4967-871b-5c6ca50791fa-kube-api-access-lwvv5" (OuterVolumeSpecName: "kube-api-access-lwvv5") pod "07b8c861-e874-4967-871b-5c6ca50791fa" (UID: "07b8c861-e874-4967-871b-5c6ca50791fa"). InnerVolumeSpecName "kube-api-access-lwvv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:14:17 crc kubenswrapper[4745]: I0121 11:14:17.963936 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-inventory" (OuterVolumeSpecName: "inventory") pod "07b8c861-e874-4967-871b-5c6ca50791fa" (UID: "07b8c861-e874-4967-871b-5c6ca50791fa"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:14:17 crc kubenswrapper[4745]: I0121 11:14:17.965388 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07b8c861-e874-4967-871b-5c6ca50791fa" (UID: "07b8c861-e874-4967-871b-5c6ca50791fa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.026986 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwvv5\" (UniqueName: \"kubernetes.io/projected/07b8c861-e874-4967-871b-5c6ca50791fa-kube-api-access-lwvv5\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.027023 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.027033 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07b8c861-e874-4967-871b-5c6ca50791fa-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.168516 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" event={"ID":"07b8c861-e874-4967-871b-5c6ca50791fa","Type":"ContainerDied","Data":"6f0c7b0d89c3615c6c49873320f8cd911794f5a46c228b299fcb13e183b68f3f"} Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.168580 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-blxw7" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.168581 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f0c7b0d89c3615c6c49873320f8cd911794f5a46c228b299fcb13e183b68f3f" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.285125 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gpd6q"] Jan 21 11:14:18 crc kubenswrapper[4745]: E0121 11:14:18.285543 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07b8c861-e874-4967-871b-5c6ca50791fa" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.285556 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="07b8c861-e874-4967-871b-5c6ca50791fa" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:14:18 crc kubenswrapper[4745]: E0121 11:14:18.285571 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e84ae5-f336-4d47-9e85-e790c2854705" containerName="extract-utilities" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.285577 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e84ae5-f336-4d47-9e85-e790c2854705" containerName="extract-utilities" Jan 21 11:14:18 crc kubenswrapper[4745]: E0121 11:14:18.285590 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e84ae5-f336-4d47-9e85-e790c2854705" containerName="extract-content" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.285595 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e84ae5-f336-4d47-9e85-e790c2854705" containerName="extract-content" Jan 21 11:14:18 crc kubenswrapper[4745]: E0121 11:14:18.285624 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34e84ae5-f336-4d47-9e85-e790c2854705" containerName="registry-server" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 
11:14:18.285629 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="34e84ae5-f336-4d47-9e85-e790c2854705" containerName="registry-server" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.285796 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="07b8c861-e874-4967-871b-5c6ca50791fa" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.285812 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="34e84ae5-f336-4d47-9e85-e790c2854705" containerName="registry-server" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.286399 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.289438 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.292173 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.292258 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.292344 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.309797 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gpd6q"] Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.333203 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2m6k\" (UniqueName: \"kubernetes.io/projected/548fb3fd-319e-4b59-a233-afbb48300c3b-kube-api-access-g2m6k\") pod 
\"ssh-known-hosts-edpm-deployment-gpd6q\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.333490 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gpd6q\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.333648 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gpd6q\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.436169 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2m6k\" (UniqueName: \"kubernetes.io/projected/548fb3fd-319e-4b59-a233-afbb48300c3b-kube-api-access-g2m6k\") pod \"ssh-known-hosts-edpm-deployment-gpd6q\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.436289 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gpd6q\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.436359 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gpd6q\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.441625 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gpd6q\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.448898 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gpd6q\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.458362 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2m6k\" (UniqueName: \"kubernetes.io/projected/548fb3fd-319e-4b59-a233-afbb48300c3b-kube-api-access-g2m6k\") pod \"ssh-known-hosts-edpm-deployment-gpd6q\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:18 crc kubenswrapper[4745]: I0121 11:14:18.610001 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:19 crc kubenswrapper[4745]: I0121 11:14:19.208371 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gpd6q"] Jan 21 11:14:20 crc kubenswrapper[4745]: I0121 11:14:20.187980 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" event={"ID":"548fb3fd-319e-4b59-a233-afbb48300c3b","Type":"ContainerStarted","Data":"415ada3a1bf2608e6f1daae7b15e9f204c08caf8230db365c9d97488db875ffa"} Jan 21 11:14:21 crc kubenswrapper[4745]: I0121 11:14:21.200785 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" event={"ID":"548fb3fd-319e-4b59-a233-afbb48300c3b","Type":"ContainerStarted","Data":"b946594452b338cd3802ce996707d33b9586dd85e2a7c13c70ed58642f5de67b"} Jan 21 11:14:21 crc kubenswrapper[4745]: I0121 11:14:21.228586 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" podStartSLOduration=2.036012153 podStartE2EDuration="3.22855548s" podCreationTimestamp="2026-01-21 11:14:18 +0000 UTC" firstStartedPulling="2026-01-21 11:14:19.221257014 +0000 UTC m=+2243.682044612" lastFinishedPulling="2026-01-21 11:14:20.413800341 +0000 UTC m=+2244.874587939" observedRunningTime="2026-01-21 11:14:21.220976125 +0000 UTC m=+2245.681763723" watchObservedRunningTime="2026-01-21 11:14:21.22855548 +0000 UTC m=+2245.689343078" Jan 21 11:14:29 crc kubenswrapper[4745]: I0121 11:14:29.300090 4745 generic.go:334] "Generic (PLEG): container finished" podID="548fb3fd-319e-4b59-a233-afbb48300c3b" containerID="b946594452b338cd3802ce996707d33b9586dd85e2a7c13c70ed58642f5de67b" exitCode=0 Jan 21 11:14:29 crc kubenswrapper[4745]: I0121 11:14:29.300210 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" 
event={"ID":"548fb3fd-319e-4b59-a233-afbb48300c3b","Type":"ContainerDied","Data":"b946594452b338cd3802ce996707d33b9586dd85e2a7c13c70ed58642f5de67b"} Jan 21 11:14:30 crc kubenswrapper[4745]: I0121 11:14:30.810127 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:30 crc kubenswrapper[4745]: I0121 11:14:30.898690 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-ssh-key-openstack-edpm-ipam\") pod \"548fb3fd-319e-4b59-a233-afbb48300c3b\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " Jan 21 11:14:30 crc kubenswrapper[4745]: I0121 11:14:30.898759 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-inventory-0\") pod \"548fb3fd-319e-4b59-a233-afbb48300c3b\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " Jan 21 11:14:30 crc kubenswrapper[4745]: I0121 11:14:30.898836 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2m6k\" (UniqueName: \"kubernetes.io/projected/548fb3fd-319e-4b59-a233-afbb48300c3b-kube-api-access-g2m6k\") pod \"548fb3fd-319e-4b59-a233-afbb48300c3b\" (UID: \"548fb3fd-319e-4b59-a233-afbb48300c3b\") " Jan 21 11:14:30 crc kubenswrapper[4745]: I0121 11:14:30.906359 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/548fb3fd-319e-4b59-a233-afbb48300c3b-kube-api-access-g2m6k" (OuterVolumeSpecName: "kube-api-access-g2m6k") pod "548fb3fd-319e-4b59-a233-afbb48300c3b" (UID: "548fb3fd-319e-4b59-a233-afbb48300c3b"). InnerVolumeSpecName "kube-api-access-g2m6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:14:30 crc kubenswrapper[4745]: I0121 11:14:30.928065 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "548fb3fd-319e-4b59-a233-afbb48300c3b" (UID: "548fb3fd-319e-4b59-a233-afbb48300c3b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:14:30 crc kubenswrapper[4745]: I0121 11:14:30.938285 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "548fb3fd-319e-4b59-a233-afbb48300c3b" (UID: "548fb3fd-319e-4b59-a233-afbb48300c3b"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.000712 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2m6k\" (UniqueName: \"kubernetes.io/projected/548fb3fd-319e-4b59-a233-afbb48300c3b-kube-api-access-g2m6k\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.000755 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.000765 4745 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/548fb3fd-319e-4b59-a233-afbb48300c3b-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.322344 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" 
event={"ID":"548fb3fd-319e-4b59-a233-afbb48300c3b","Type":"ContainerDied","Data":"415ada3a1bf2608e6f1daae7b15e9f204c08caf8230db365c9d97488db875ffa"} Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.322397 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="415ada3a1bf2608e6f1daae7b15e9f204c08caf8230db365c9d97488db875ffa" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.322414 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gpd6q" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.432673 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s"] Jan 21 11:14:31 crc kubenswrapper[4745]: E0121 11:14:31.433423 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="548fb3fd-319e-4b59-a233-afbb48300c3b" containerName="ssh-known-hosts-edpm-deployment" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.433448 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="548fb3fd-319e-4b59-a233-afbb48300c3b" containerName="ssh-known-hosts-edpm-deployment" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.433723 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="548fb3fd-319e-4b59-a233-afbb48300c3b" containerName="ssh-known-hosts-edpm-deployment" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.434947 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.443134 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.443618 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.443888 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.444613 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.448030 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s"] Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.512116 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f966\" (UniqueName: \"kubernetes.io/projected/6166dc52-4171-488c-99bb-f522c631efb0-kube-api-access-6f966\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q889s\" (UID: \"6166dc52-4171-488c-99bb-f522c631efb0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.512237 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q889s\" (UID: \"6166dc52-4171-488c-99bb-f522c631efb0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.512278 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q889s\" (UID: \"6166dc52-4171-488c-99bb-f522c631efb0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.615205 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q889s\" (UID: \"6166dc52-4171-488c-99bb-f522c631efb0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.615668 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q889s\" (UID: \"6166dc52-4171-488c-99bb-f522c631efb0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.615833 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6f966\" (UniqueName: \"kubernetes.io/projected/6166dc52-4171-488c-99bb-f522c631efb0-kube-api-access-6f966\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q889s\" (UID: \"6166dc52-4171-488c-99bb-f522c631efb0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.621179 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q889s\" (UID: 
\"6166dc52-4171-488c-99bb-f522c631efb0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.621212 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q889s\" (UID: \"6166dc52-4171-488c-99bb-f522c631efb0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.642801 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6f966\" (UniqueName: \"kubernetes.io/projected/6166dc52-4171-488c-99bb-f522c631efb0-kube-api-access-6f966\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q889s\" (UID: \"6166dc52-4171-488c-99bb-f522c631efb0\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:31 crc kubenswrapper[4745]: I0121 11:14:31.772357 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:32 crc kubenswrapper[4745]: I0121 11:14:32.402105 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s"] Jan 21 11:14:33 crc kubenswrapper[4745]: I0121 11:14:33.340983 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" event={"ID":"6166dc52-4171-488c-99bb-f522c631efb0","Type":"ContainerStarted","Data":"0c75e23ae7eb5e0792ee66b2a1a8c77aa90d4a9ae2ea0bdb27c00520028bfa55"} Jan 21 11:14:34 crc kubenswrapper[4745]: I0121 11:14:34.355712 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" event={"ID":"6166dc52-4171-488c-99bb-f522c631efb0","Type":"ContainerStarted","Data":"b4ccfe0fe65ed466ab84b7991ee668ade585e133db23d581cd6d48713e1e9542"} Jan 21 11:14:34 crc kubenswrapper[4745]: I0121 11:14:34.391611 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" podStartSLOduration=2.640202098 podStartE2EDuration="3.391591937s" podCreationTimestamp="2026-01-21 11:14:31 +0000 UTC" firstStartedPulling="2026-01-21 11:14:32.413629367 +0000 UTC m=+2256.874416975" lastFinishedPulling="2026-01-21 11:14:33.165019216 +0000 UTC m=+2257.625806814" observedRunningTime="2026-01-21 11:14:34.388944145 +0000 UTC m=+2258.849731763" watchObservedRunningTime="2026-01-21 11:14:34.391591937 +0000 UTC m=+2258.852379545" Jan 21 11:14:42 crc kubenswrapper[4745]: I0121 11:14:42.434886 4745 generic.go:334] "Generic (PLEG): container finished" podID="6166dc52-4171-488c-99bb-f522c631efb0" containerID="b4ccfe0fe65ed466ab84b7991ee668ade585e133db23d581cd6d48713e1e9542" exitCode=0 Jan 21 11:14:42 crc kubenswrapper[4745]: I0121 11:14:42.435001 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" event={"ID":"6166dc52-4171-488c-99bb-f522c631efb0","Type":"ContainerDied","Data":"b4ccfe0fe65ed466ab84b7991ee668ade585e133db23d581cd6d48713e1e9542"} Jan 21 11:14:43 crc kubenswrapper[4745]: I0121 11:14:43.870416 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.027879 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6f966\" (UniqueName: \"kubernetes.io/projected/6166dc52-4171-488c-99bb-f522c631efb0-kube-api-access-6f966\") pod \"6166dc52-4171-488c-99bb-f522c631efb0\" (UID: \"6166dc52-4171-488c-99bb-f522c631efb0\") " Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.028257 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-ssh-key-openstack-edpm-ipam\") pod \"6166dc52-4171-488c-99bb-f522c631efb0\" (UID: \"6166dc52-4171-488c-99bb-f522c631efb0\") " Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.028341 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-inventory\") pod \"6166dc52-4171-488c-99bb-f522c631efb0\" (UID: \"6166dc52-4171-488c-99bb-f522c631efb0\") " Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.036756 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6166dc52-4171-488c-99bb-f522c631efb0-kube-api-access-6f966" (OuterVolumeSpecName: "kube-api-access-6f966") pod "6166dc52-4171-488c-99bb-f522c631efb0" (UID: "6166dc52-4171-488c-99bb-f522c631efb0"). InnerVolumeSpecName "kube-api-access-6f966". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.072556 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-inventory" (OuterVolumeSpecName: "inventory") pod "6166dc52-4171-488c-99bb-f522c631efb0" (UID: "6166dc52-4171-488c-99bb-f522c631efb0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.096265 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6166dc52-4171-488c-99bb-f522c631efb0" (UID: "6166dc52-4171-488c-99bb-f522c631efb0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.132273 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6f966\" (UniqueName: \"kubernetes.io/projected/6166dc52-4171-488c-99bb-f522c631efb0-kube-api-access-6f966\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.133681 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.133789 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6166dc52-4171-488c-99bb-f522c631efb0-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.459851 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" 
event={"ID":"6166dc52-4171-488c-99bb-f522c631efb0","Type":"ContainerDied","Data":"0c75e23ae7eb5e0792ee66b2a1a8c77aa90d4a9ae2ea0bdb27c00520028bfa55"} Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.459999 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c75e23ae7eb5e0792ee66b2a1a8c77aa90d4a9ae2ea0bdb27c00520028bfa55" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.459900 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q889s" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.560498 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp"] Jan 21 11:14:44 crc kubenswrapper[4745]: E0121 11:14:44.560876 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6166dc52-4171-488c-99bb-f522c631efb0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.560893 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="6166dc52-4171-488c-99bb-f522c631efb0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.561059 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="6166dc52-4171-488c-99bb-f522c631efb0" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.561752 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.564078 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.565130 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.565428 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.565614 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.579403 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp"] Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.645736 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gqfb\" (UniqueName: \"kubernetes.io/projected/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-kube-api-access-5gqfb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.645864 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.646421 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.748754 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.748858 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gqfb\" (UniqueName: \"kubernetes.io/projected/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-kube-api-access-5gqfb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.748938 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.753729 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.755978 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.777143 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gqfb\" (UniqueName: \"kubernetes.io/projected/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-kube-api-access-5gqfb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:44 crc kubenswrapper[4745]: I0121 11:14:44.888704 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:45 crc kubenswrapper[4745]: I0121 11:14:45.535083 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp"] Jan 21 11:14:46 crc kubenswrapper[4745]: I0121 11:14:46.488184 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" event={"ID":"9ae3b0e8-dabd-4b52-91c9-55d4695f4660","Type":"ContainerStarted","Data":"17f89596487fee812dd4bbf6c310b6979a44411fd4aeba7ab7f3cddbda9e43a6"} Jan 21 11:14:46 crc kubenswrapper[4745]: I0121 11:14:46.490661 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" event={"ID":"9ae3b0e8-dabd-4b52-91c9-55d4695f4660","Type":"ContainerStarted","Data":"2176edd7099f13922a2fc5f10eea96cb887e31c50174a6916b0394e9a013bed9"} Jan 21 11:14:46 crc kubenswrapper[4745]: I0121 11:14:46.510790 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" podStartSLOduration=1.995617631 podStartE2EDuration="2.510772748s" podCreationTimestamp="2026-01-21 11:14:44 +0000 UTC" firstStartedPulling="2026-01-21 11:14:45.535314762 +0000 UTC m=+2269.996102360" lastFinishedPulling="2026-01-21 11:14:46.050469879 +0000 UTC m=+2270.511257477" observedRunningTime="2026-01-21 11:14:46.503259913 +0000 UTC m=+2270.964047511" watchObservedRunningTime="2026-01-21 11:14:46.510772748 +0000 UTC m=+2270.971560486" Jan 21 11:14:57 crc kubenswrapper[4745]: I0121 11:14:57.615245 4745 generic.go:334] "Generic (PLEG): container finished" podID="9ae3b0e8-dabd-4b52-91c9-55d4695f4660" containerID="17f89596487fee812dd4bbf6c310b6979a44411fd4aeba7ab7f3cddbda9e43a6" exitCode=0 Jan 21 11:14:57 crc kubenswrapper[4745]: I0121 11:14:57.615759 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" event={"ID":"9ae3b0e8-dabd-4b52-91c9-55d4695f4660","Type":"ContainerDied","Data":"17f89596487fee812dd4bbf6c310b6979a44411fd4aeba7ab7f3cddbda9e43a6"} Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.123911 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.233427 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-inventory\") pod \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.234171 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gqfb\" (UniqueName: \"kubernetes.io/projected/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-kube-api-access-5gqfb\") pod \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.234262 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-ssh-key-openstack-edpm-ipam\") pod \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\" (UID: \"9ae3b0e8-dabd-4b52-91c9-55d4695f4660\") " Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.242195 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-kube-api-access-5gqfb" (OuterVolumeSpecName: "kube-api-access-5gqfb") pod "9ae3b0e8-dabd-4b52-91c9-55d4695f4660" (UID: "9ae3b0e8-dabd-4b52-91c9-55d4695f4660"). InnerVolumeSpecName "kube-api-access-5gqfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.270734 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9ae3b0e8-dabd-4b52-91c9-55d4695f4660" (UID: "9ae3b0e8-dabd-4b52-91c9-55d4695f4660"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.275146 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-inventory" (OuterVolumeSpecName: "inventory") pod "9ae3b0e8-dabd-4b52-91c9-55d4695f4660" (UID: "9ae3b0e8-dabd-4b52-91c9-55d4695f4660"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.336780 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.336818 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gqfb\" (UniqueName: \"kubernetes.io/projected/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-kube-api-access-5gqfb\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.336828 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9ae3b0e8-dabd-4b52-91c9-55d4695f4660-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.641355 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" 
event={"ID":"9ae3b0e8-dabd-4b52-91c9-55d4695f4660","Type":"ContainerDied","Data":"2176edd7099f13922a2fc5f10eea96cb887e31c50174a6916b0394e9a013bed9"} Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.642031 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2176edd7099f13922a2fc5f10eea96cb887e31c50174a6916b0394e9a013bed9" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.641395 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.781717 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j"] Jan 21 11:14:59 crc kubenswrapper[4745]: E0121 11:14:59.782508 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ae3b0e8-dabd-4b52-91c9-55d4695f4660" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.782572 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ae3b0e8-dabd-4b52-91c9-55d4695f4660" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.782853 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ae3b0e8-dabd-4b52-91c9-55d4695f4660" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.784021 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.791069 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.791609 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.791818 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.791990 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.792429 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.792619 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.792885 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.794603 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.811477 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j"] Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.950409 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.950851 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.950925 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.950953 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.950987 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.951016 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.951090 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.951119 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.951146 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.951184 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.951230 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.951260 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.951287 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:14:59 crc kubenswrapper[4745]: I0121 11:14:59.951334 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jfb\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-kube-api-access-w8jfb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053413 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8jfb\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-kube-api-access-w8jfb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053489 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053553 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053605 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053631 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053661 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053691 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ssh-key-openstack-edpm-ipam\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053745 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053764 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053783 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053811 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053845 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053871 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.053895 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.061226 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.061664 
4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.064669 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.065096 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.066248 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.070468 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.071952 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.073088 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.074822 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.074939 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.078480 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.080431 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.088738 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8jfb\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-kube-api-access-w8jfb\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.091365 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-njk5j\" (UID: 
\"a051be73-e1d2-4233-8da1-847120a2fe1b\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.106471 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.147135 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2"] Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.149858 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.152617 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.152849 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.173727 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2"] Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.258990 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6cc744e-e212-4893-9fcb-60a835f3d83d-config-volume\") pod \"collect-profiles-29483235-5mmr2\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.259087 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z5z9\" (UniqueName: 
\"kubernetes.io/projected/d6cc744e-e212-4893-9fcb-60a835f3d83d-kube-api-access-8z5z9\") pod \"collect-profiles-29483235-5mmr2\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.259118 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6cc744e-e212-4893-9fcb-60a835f3d83d-secret-volume\") pod \"collect-profiles-29483235-5mmr2\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.363639 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6cc744e-e212-4893-9fcb-60a835f3d83d-config-volume\") pod \"collect-profiles-29483235-5mmr2\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.364250 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z5z9\" (UniqueName: \"kubernetes.io/projected/d6cc744e-e212-4893-9fcb-60a835f3d83d-kube-api-access-8z5z9\") pod \"collect-profiles-29483235-5mmr2\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.364284 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6cc744e-e212-4893-9fcb-60a835f3d83d-secret-volume\") pod \"collect-profiles-29483235-5mmr2\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:00 crc 
kubenswrapper[4745]: I0121 11:15:00.365261 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6cc744e-e212-4893-9fcb-60a835f3d83d-config-volume\") pod \"collect-profiles-29483235-5mmr2\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.373733 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6cc744e-e212-4893-9fcb-60a835f3d83d-secret-volume\") pod \"collect-profiles-29483235-5mmr2\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.390340 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z5z9\" (UniqueName: \"kubernetes.io/projected/d6cc744e-e212-4893-9fcb-60a835f3d83d-kube-api-access-8z5z9\") pod \"collect-profiles-29483235-5mmr2\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.568176 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:00 crc kubenswrapper[4745]: I0121 11:15:00.780812 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j"] Jan 21 11:15:01 crc kubenswrapper[4745]: I0121 11:15:01.087184 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2"] Jan 21 11:15:01 crc kubenswrapper[4745]: I0121 11:15:01.666085 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" event={"ID":"d6cc744e-e212-4893-9fcb-60a835f3d83d","Type":"ContainerStarted","Data":"ef6c8a271cbdb56df63572886515e9549dd928f532da9240c148b8a04273966a"} Jan 21 11:15:01 crc kubenswrapper[4745]: I0121 11:15:01.666618 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" event={"ID":"d6cc744e-e212-4893-9fcb-60a835f3d83d","Type":"ContainerStarted","Data":"1d19a0cf4e88208011e711e6b62175b91815bf8635d934b36fd7e76201d6fd98"} Jan 21 11:15:01 crc kubenswrapper[4745]: I0121 11:15:01.668122 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" event={"ID":"a051be73-e1d2-4233-8da1-847120a2fe1b","Type":"ContainerStarted","Data":"e76785ded5056ba9a95cb0479ba151abfe8f8b2c1ee98782f5d950cbdf2d1adb"} Jan 21 11:15:01 crc kubenswrapper[4745]: I0121 11:15:01.691915 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" podStartSLOduration=1.691894573 podStartE2EDuration="1.691894573s" podCreationTimestamp="2026-01-21 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 
11:15:01.690123565 +0000 UTC m=+2286.150911153" watchObservedRunningTime="2026-01-21 11:15:01.691894573 +0000 UTC m=+2286.152682171" Jan 21 11:15:02 crc kubenswrapper[4745]: I0121 11:15:02.681789 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" event={"ID":"a051be73-e1d2-4233-8da1-847120a2fe1b","Type":"ContainerStarted","Data":"dbaf94b77fc76c640f4d298457fb5448b9084340dd06f253377417ea53ae1c6f"} Jan 21 11:15:02 crc kubenswrapper[4745]: I0121 11:15:02.688920 4745 generic.go:334] "Generic (PLEG): container finished" podID="d6cc744e-e212-4893-9fcb-60a835f3d83d" containerID="ef6c8a271cbdb56df63572886515e9549dd928f532da9240c148b8a04273966a" exitCode=0 Jan 21 11:15:02 crc kubenswrapper[4745]: I0121 11:15:02.689016 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" event={"ID":"d6cc744e-e212-4893-9fcb-60a835f3d83d","Type":"ContainerDied","Data":"ef6c8a271cbdb56df63572886515e9549dd928f532da9240c148b8a04273966a"} Jan 21 11:15:02 crc kubenswrapper[4745]: I0121 11:15:02.714717 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" podStartSLOduration=2.782827728 podStartE2EDuration="3.714688003s" podCreationTimestamp="2026-01-21 11:14:59 +0000 UTC" firstStartedPulling="2026-01-21 11:15:00.807793164 +0000 UTC m=+2285.268580762" lastFinishedPulling="2026-01-21 11:15:01.739653439 +0000 UTC m=+2286.200441037" observedRunningTime="2026-01-21 11:15:02.708639417 +0000 UTC m=+2287.169427015" watchObservedRunningTime="2026-01-21 11:15:02.714688003 +0000 UTC m=+2287.175475601" Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.105231 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.204444 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6cc744e-e212-4893-9fcb-60a835f3d83d-secret-volume\") pod \"d6cc744e-e212-4893-9fcb-60a835f3d83d\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.204547 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z5z9\" (UniqueName: \"kubernetes.io/projected/d6cc744e-e212-4893-9fcb-60a835f3d83d-kube-api-access-8z5z9\") pod \"d6cc744e-e212-4893-9fcb-60a835f3d83d\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.204771 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6cc744e-e212-4893-9fcb-60a835f3d83d-config-volume\") pod \"d6cc744e-e212-4893-9fcb-60a835f3d83d\" (UID: \"d6cc744e-e212-4893-9fcb-60a835f3d83d\") " Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.206168 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6cc744e-e212-4893-9fcb-60a835f3d83d-config-volume" (OuterVolumeSpecName: "config-volume") pod "d6cc744e-e212-4893-9fcb-60a835f3d83d" (UID: "d6cc744e-e212-4893-9fcb-60a835f3d83d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.213862 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6cc744e-e212-4893-9fcb-60a835f3d83d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d6cc744e-e212-4893-9fcb-60a835f3d83d" (UID: "d6cc744e-e212-4893-9fcb-60a835f3d83d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.216159 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6cc744e-e212-4893-9fcb-60a835f3d83d-kube-api-access-8z5z9" (OuterVolumeSpecName: "kube-api-access-8z5z9") pod "d6cc744e-e212-4893-9fcb-60a835f3d83d" (UID: "d6cc744e-e212-4893-9fcb-60a835f3d83d"). InnerVolumeSpecName "kube-api-access-8z5z9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.308136 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d6cc744e-e212-4893-9fcb-60a835f3d83d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.308402 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d6cc744e-e212-4893-9fcb-60a835f3d83d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.308505 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z5z9\" (UniqueName: \"kubernetes.io/projected/d6cc744e-e212-4893-9fcb-60a835f3d83d-kube-api-access-8z5z9\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.722462 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" event={"ID":"d6cc744e-e212-4893-9fcb-60a835f3d83d","Type":"ContainerDied","Data":"1d19a0cf4e88208011e711e6b62175b91815bf8635d934b36fd7e76201d6fd98"} Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.723742 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d19a0cf4e88208011e711e6b62175b91815bf8635d934b36fd7e76201d6fd98" Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.722690 4745 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2" Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.816811 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2"] Jan 21 11:15:04 crc kubenswrapper[4745]: I0121 11:15:04.828730 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483190-5pfx2"] Jan 21 11:15:06 crc kubenswrapper[4745]: I0121 11:15:06.013960 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1c4364e-4898-4cd5-9ac7-9c800820e244" path="/var/lib/kubelet/pods/e1c4364e-4898-4cd5-9ac7-9c800820e244/volumes" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.792155 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f5q9t"] Jan 21 11:15:07 crc kubenswrapper[4745]: E0121 11:15:07.793333 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cc744e-e212-4893-9fcb-60a835f3d83d" containerName="collect-profiles" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.793748 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cc744e-e212-4893-9fcb-60a835f3d83d" containerName="collect-profiles" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.794057 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cc744e-e212-4893-9fcb-60a835f3d83d" containerName="collect-profiles" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.795756 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.806744 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttt7s\" (UniqueName: \"kubernetes.io/projected/256b6c22-930d-459c-95e6-6a7af2155176-kube-api-access-ttt7s\") pod \"community-operators-f5q9t\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.807461 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-catalog-content\") pod \"community-operators-f5q9t\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.807522 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-utilities\") pod \"community-operators-f5q9t\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.816884 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f5q9t"] Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.910581 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttt7s\" (UniqueName: \"kubernetes.io/projected/256b6c22-930d-459c-95e6-6a7af2155176-kube-api-access-ttt7s\") pod \"community-operators-f5q9t\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.910901 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-catalog-content\") pod \"community-operators-f5q9t\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.910924 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-utilities\") pod \"community-operators-f5q9t\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.911470 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-catalog-content\") pod \"community-operators-f5q9t\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.911667 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-utilities\") pod \"community-operators-f5q9t\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:07 crc kubenswrapper[4745]: I0121 11:15:07.934422 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttt7s\" (UniqueName: \"kubernetes.io/projected/256b6c22-930d-459c-95e6-6a7af2155176-kube-api-access-ttt7s\") pod \"community-operators-f5q9t\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:08 crc kubenswrapper[4745]: I0121 11:15:08.128178 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:08 crc kubenswrapper[4745]: I0121 11:15:08.778318 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f5q9t"] Jan 21 11:15:09 crc kubenswrapper[4745]: I0121 11:15:09.791070 4745 generic.go:334] "Generic (PLEG): container finished" podID="256b6c22-930d-459c-95e6-6a7af2155176" containerID="0e46ec5fe251e2d2978d1c668932face3bf7ffccf1fd32b34e1e93fdb2942fcb" exitCode=0 Jan 21 11:15:09 crc kubenswrapper[4745]: I0121 11:15:09.791182 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5q9t" event={"ID":"256b6c22-930d-459c-95e6-6a7af2155176","Type":"ContainerDied","Data":"0e46ec5fe251e2d2978d1c668932face3bf7ffccf1fd32b34e1e93fdb2942fcb"} Jan 21 11:15:09 crc kubenswrapper[4745]: I0121 11:15:09.791641 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5q9t" event={"ID":"256b6c22-930d-459c-95e6-6a7af2155176","Type":"ContainerStarted","Data":"e1f49559a8d692c9b69c7cc35a1d1ec932fd2dace20b5180be4914a8ec6a65ae"} Jan 21 11:15:10 crc kubenswrapper[4745]: I0121 11:15:10.807471 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5q9t" event={"ID":"256b6c22-930d-459c-95e6-6a7af2155176","Type":"ContainerStarted","Data":"a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9"} Jan 21 11:15:11 crc kubenswrapper[4745]: I0121 11:15:11.821504 4745 generic.go:334] "Generic (PLEG): container finished" podID="256b6c22-930d-459c-95e6-6a7af2155176" containerID="a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9" exitCode=0 Jan 21 11:15:11 crc kubenswrapper[4745]: I0121 11:15:11.821723 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5q9t" 
event={"ID":"256b6c22-930d-459c-95e6-6a7af2155176","Type":"ContainerDied","Data":"a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9"} Jan 21 11:15:13 crc kubenswrapper[4745]: I0121 11:15:13.841070 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5q9t" event={"ID":"256b6c22-930d-459c-95e6-6a7af2155176","Type":"ContainerStarted","Data":"1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59"} Jan 21 11:15:13 crc kubenswrapper[4745]: I0121 11:15:13.866414 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f5q9t" podStartSLOduration=3.466225594 podStartE2EDuration="6.866375328s" podCreationTimestamp="2026-01-21 11:15:07 +0000 UTC" firstStartedPulling="2026-01-21 11:15:09.796038241 +0000 UTC m=+2294.256825839" lastFinishedPulling="2026-01-21 11:15:13.196187975 +0000 UTC m=+2297.656975573" observedRunningTime="2026-01-21 11:15:13.858443182 +0000 UTC m=+2298.319230780" watchObservedRunningTime="2026-01-21 11:15:13.866375328 +0000 UTC m=+2298.327162916" Jan 21 11:15:18 crc kubenswrapper[4745]: I0121 11:15:18.129458 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:18 crc kubenswrapper[4745]: I0121 11:15:18.130228 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:18 crc kubenswrapper[4745]: I0121 11:15:18.186288 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:18 crc kubenswrapper[4745]: I0121 11:15:18.943186 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:18 crc kubenswrapper[4745]: I0121 11:15:18.994694 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-f5q9t"] Jan 21 11:15:20 crc kubenswrapper[4745]: I0121 11:15:20.905691 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-f5q9t" podUID="256b6c22-930d-459c-95e6-6a7af2155176" containerName="registry-server" containerID="cri-o://1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59" gracePeriod=2 Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.405723 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.475347 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-utilities\") pod \"256b6c22-930d-459c-95e6-6a7af2155176\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.475695 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttt7s\" (UniqueName: \"kubernetes.io/projected/256b6c22-930d-459c-95e6-6a7af2155176-kube-api-access-ttt7s\") pod \"256b6c22-930d-459c-95e6-6a7af2155176\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.475778 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-catalog-content\") pod \"256b6c22-930d-459c-95e6-6a7af2155176\" (UID: \"256b6c22-930d-459c-95e6-6a7af2155176\") " Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.476272 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-utilities" (OuterVolumeSpecName: "utilities") pod "256b6c22-930d-459c-95e6-6a7af2155176" (UID: 
"256b6c22-930d-459c-95e6-6a7af2155176"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.476413 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.487869 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/256b6c22-930d-459c-95e6-6a7af2155176-kube-api-access-ttt7s" (OuterVolumeSpecName: "kube-api-access-ttt7s") pod "256b6c22-930d-459c-95e6-6a7af2155176" (UID: "256b6c22-930d-459c-95e6-6a7af2155176"). InnerVolumeSpecName "kube-api-access-ttt7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.532265 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "256b6c22-930d-459c-95e6-6a7af2155176" (UID: "256b6c22-930d-459c-95e6-6a7af2155176"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.578698 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttt7s\" (UniqueName: \"kubernetes.io/projected/256b6c22-930d-459c-95e6-6a7af2155176-kube-api-access-ttt7s\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.578750 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/256b6c22-930d-459c-95e6-6a7af2155176-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.917784 4745 generic.go:334] "Generic (PLEG): container finished" podID="256b6c22-930d-459c-95e6-6a7af2155176" containerID="1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59" exitCode=0 Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.917841 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5q9t" event={"ID":"256b6c22-930d-459c-95e6-6a7af2155176","Type":"ContainerDied","Data":"1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59"} Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.918163 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f5q9t" event={"ID":"256b6c22-930d-459c-95e6-6a7af2155176","Type":"ContainerDied","Data":"e1f49559a8d692c9b69c7cc35a1d1ec932fd2dace20b5180be4914a8ec6a65ae"} Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.918192 4745 scope.go:117] "RemoveContainer" containerID="1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59" Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.917881 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f5q9t" Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.966809 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f5q9t"] Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.979416 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f5q9t"] Jan 21 11:15:21 crc kubenswrapper[4745]: I0121 11:15:21.982435 4745 scope.go:117] "RemoveContainer" containerID="a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9" Jan 21 11:15:22 crc kubenswrapper[4745]: I0121 11:15:22.006080 4745 scope.go:117] "RemoveContainer" containerID="0e46ec5fe251e2d2978d1c668932face3bf7ffccf1fd32b34e1e93fdb2942fcb" Jan 21 11:15:22 crc kubenswrapper[4745]: I0121 11:15:22.031627 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="256b6c22-930d-459c-95e6-6a7af2155176" path="/var/lib/kubelet/pods/256b6c22-930d-459c-95e6-6a7af2155176/volumes" Jan 21 11:15:22 crc kubenswrapper[4745]: I0121 11:15:22.070863 4745 scope.go:117] "RemoveContainer" containerID="1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59" Jan 21 11:15:22 crc kubenswrapper[4745]: E0121 11:15:22.071618 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59\": container with ID starting with 1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59 not found: ID does not exist" containerID="1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59" Jan 21 11:15:22 crc kubenswrapper[4745]: I0121 11:15:22.071666 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59"} err="failed to get container status 
\"1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59\": rpc error: code = NotFound desc = could not find container \"1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59\": container with ID starting with 1ec8a6bc33af9457f1a8922af01fd7daf83b1aa450d85219575e21bcfbf40b59 not found: ID does not exist" Jan 21 11:15:22 crc kubenswrapper[4745]: I0121 11:15:22.071696 4745 scope.go:117] "RemoveContainer" containerID="a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9" Jan 21 11:15:22 crc kubenswrapper[4745]: E0121 11:15:22.072028 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9\": container with ID starting with a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9 not found: ID does not exist" containerID="a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9" Jan 21 11:15:22 crc kubenswrapper[4745]: I0121 11:15:22.072101 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9"} err="failed to get container status \"a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9\": rpc error: code = NotFound desc = could not find container \"a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9\": container with ID starting with a34017e1928e0f6e0e0fde33b2179c6540461f2c0c8d9ea57e331a53ac59cfa9 not found: ID does not exist" Jan 21 11:15:22 crc kubenswrapper[4745]: I0121 11:15:22.072143 4745 scope.go:117] "RemoveContainer" containerID="0e46ec5fe251e2d2978d1c668932face3bf7ffccf1fd32b34e1e93fdb2942fcb" Jan 21 11:15:22 crc kubenswrapper[4745]: E0121 11:15:22.072578 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"0e46ec5fe251e2d2978d1c668932face3bf7ffccf1fd32b34e1e93fdb2942fcb\": container with ID starting with 0e46ec5fe251e2d2978d1c668932face3bf7ffccf1fd32b34e1e93fdb2942fcb not found: ID does not exist" containerID="0e46ec5fe251e2d2978d1c668932face3bf7ffccf1fd32b34e1e93fdb2942fcb" Jan 21 11:15:22 crc kubenswrapper[4745]: I0121 11:15:22.072626 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e46ec5fe251e2d2978d1c668932face3bf7ffccf1fd32b34e1e93fdb2942fcb"} err="failed to get container status \"0e46ec5fe251e2d2978d1c668932face3bf7ffccf1fd32b34e1e93fdb2942fcb\": rpc error: code = NotFound desc = could not find container \"0e46ec5fe251e2d2978d1c668932face3bf7ffccf1fd32b34e1e93fdb2942fcb\": container with ID starting with 0e46ec5fe251e2d2978d1c668932face3bf7ffccf1fd32b34e1e93fdb2942fcb not found: ID does not exist" Jan 21 11:15:43 crc kubenswrapper[4745]: I0121 11:15:43.144986 4745 generic.go:334] "Generic (PLEG): container finished" podID="a051be73-e1d2-4233-8da1-847120a2fe1b" containerID="dbaf94b77fc76c640f4d298457fb5448b9084340dd06f253377417ea53ae1c6f" exitCode=0 Jan 21 11:15:43 crc kubenswrapper[4745]: I0121 11:15:43.145083 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" event={"ID":"a051be73-e1d2-4233-8da1-847120a2fe1b","Type":"ContainerDied","Data":"dbaf94b77fc76c640f4d298457fb5448b9084340dd06f253377417ea53ae1c6f"} Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.673454 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.735580 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-neutron-metadata-combined-ca-bundle\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.735728 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-telemetry-combined-ca-bundle\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.735766 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-libvirt-combined-ca-bundle\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.735955 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.735980 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ovn-combined-ca-bundle\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: 
\"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.736012 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-nova-combined-ca-bundle\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.736051 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-repo-setup-combined-ca-bundle\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.736074 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ssh-key-openstack-edpm-ipam\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.736099 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8jfb\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-kube-api-access-w8jfb\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.736131 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 
11:15:44.736177 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-inventory\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.736254 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.736345 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-ovn-default-certs-0\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.736372 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-bootstrap-combined-ca-bundle\") pod \"a051be73-e1d2-4233-8da1-847120a2fe1b\" (UID: \"a051be73-e1d2-4233-8da1-847120a2fe1b\") " Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.748712 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.748957 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.749783 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.750754 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.751904 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.753367 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.753466 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.755942 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). 
InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.759138 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.760507 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.769735 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.780960 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-kube-api-access-w8jfb" (OuterVolumeSpecName: "kube-api-access-w8jfb") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "kube-api-access-w8jfb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.788380 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-inventory" (OuterVolumeSpecName: "inventory") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.793021 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a051be73-e1d2-4233-8da1-847120a2fe1b" (UID: "a051be73-e1d2-4233-8da1-847120a2fe1b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.839653 4745 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.840564 4745 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.840707 4745 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.840816 4745 reconciler_common.go:293] "Volume detached for volume 
\"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.840927 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.841036 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8jfb\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-kube-api-access-w8jfb\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.841146 4745 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.841273 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.841386 4745 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.841501 4745 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a051be73-e1d2-4233-8da1-847120a2fe1b-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 21 
11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.843186 4745 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.843320 4745 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.843435 4745 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:44 crc kubenswrapper[4745]: I0121 11:15:44.843686 4745 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a051be73-e1d2-4233-8da1-847120a2fe1b-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.177410 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" event={"ID":"a051be73-e1d2-4233-8da1-847120a2fe1b","Type":"ContainerDied","Data":"e76785ded5056ba9a95cb0479ba151abfe8f8b2c1ee98782f5d950cbdf2d1adb"} Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.177490 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e76785ded5056ba9a95cb0479ba151abfe8f8b2c1ee98782f5d950cbdf2d1adb" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.177623 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-njk5j" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.322587 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7"] Jan 21 11:15:45 crc kubenswrapper[4745]: E0121 11:15:45.323309 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256b6c22-930d-459c-95e6-6a7af2155176" containerName="extract-content" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.323331 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="256b6c22-930d-459c-95e6-6a7af2155176" containerName="extract-content" Jan 21 11:15:45 crc kubenswrapper[4745]: E0121 11:15:45.323349 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256b6c22-930d-459c-95e6-6a7af2155176" containerName="extract-utilities" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.323358 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="256b6c22-930d-459c-95e6-6a7af2155176" containerName="extract-utilities" Jan 21 11:15:45 crc kubenswrapper[4745]: E0121 11:15:45.323391 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a051be73-e1d2-4233-8da1-847120a2fe1b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.323402 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a051be73-e1d2-4233-8da1-847120a2fe1b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 11:15:45 crc kubenswrapper[4745]: E0121 11:15:45.323485 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="256b6c22-930d-459c-95e6-6a7af2155176" containerName="registry-server" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.323498 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="256b6c22-930d-459c-95e6-6a7af2155176" containerName="registry-server" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.323792 
4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="256b6c22-930d-459c-95e6-6a7af2155176" containerName="registry-server" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.323820 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a051be73-e1d2-4233-8da1-847120a2fe1b" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.325189 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.329066 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.329364 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.329521 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.331153 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.331220 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.365892 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7"] Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.472868 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: 
\"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.472953 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.473156 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.473362 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.473500 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm79d\" (UniqueName: \"kubernetes.io/projected/7e88dbcd-044a-4c58-8069-54de2ea049c0-kube-api-access-mm79d\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 
11:15:45.577103 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.577212 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.577278 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.577330 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.577365 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm79d\" (UniqueName: \"kubernetes.io/projected/7e88dbcd-044a-4c58-8069-54de2ea049c0-kube-api-access-mm79d\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.578682 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.583508 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.583548 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.585137 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.597337 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mm79d\" (UniqueName: \"kubernetes.io/projected/7e88dbcd-044a-4c58-8069-54de2ea049c0-kube-api-access-mm79d\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-65fm7\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:45 crc kubenswrapper[4745]: I0121 11:15:45.648814 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:15:46 crc kubenswrapper[4745]: I0121 11:15:46.279752 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7"] Jan 21 11:15:46 crc kubenswrapper[4745]: I0121 11:15:46.296510 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:15:47 crc kubenswrapper[4745]: I0121 11:15:47.198695 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" event={"ID":"7e88dbcd-044a-4c58-8069-54de2ea049c0","Type":"ContainerStarted","Data":"95ad409bd8505f81a97134704f41e2fdf9ec0a5ad2c161110ce267c571196f09"} Jan 21 11:15:47 crc kubenswrapper[4745]: I0121 11:15:47.199139 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" event={"ID":"7e88dbcd-044a-4c58-8069-54de2ea049c0","Type":"ContainerStarted","Data":"8fc97968fb0380412d75ac28b2560bfa170f9f0bbacad0935c937b622185bd68"} Jan 21 11:15:47 crc kubenswrapper[4745]: I0121 11:15:47.231980 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" podStartSLOduration=1.8404314130000001 podStartE2EDuration="2.231947511s" podCreationTimestamp="2026-01-21 11:15:45 +0000 UTC" firstStartedPulling="2026-01-21 11:15:46.296256772 +0000 UTC m=+2330.757044360" lastFinishedPulling="2026-01-21 11:15:46.68777286 +0000 UTC m=+2331.148560458" 
observedRunningTime="2026-01-21 11:15:47.229105803 +0000 UTC m=+2331.689893401" watchObservedRunningTime="2026-01-21 11:15:47.231947511 +0000 UTC m=+2331.692735099" Jan 21 11:15:55 crc kubenswrapper[4745]: I0121 11:15:55.053290 4745 scope.go:117] "RemoveContainer" containerID="2f5f2ac464f8a0752429ee2e88b11be2441b5c21280fb92d81d31fc9b4b23321" Jan 21 11:16:15 crc kubenswrapper[4745]: I0121 11:16:15.867794 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:16:15 crc kubenswrapper[4745]: I0121 11:16:15.868570 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:16:45 crc kubenswrapper[4745]: I0121 11:16:45.866423 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:16:45 crc kubenswrapper[4745]: I0121 11:16:45.867407 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:16:59 crc kubenswrapper[4745]: I0121 11:16:59.009863 4745 generic.go:334] "Generic (PLEG): container finished" 
podID="7e88dbcd-044a-4c58-8069-54de2ea049c0" containerID="95ad409bd8505f81a97134704f41e2fdf9ec0a5ad2c161110ce267c571196f09" exitCode=0 Jan 21 11:16:59 crc kubenswrapper[4745]: I0121 11:16:59.009956 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" event={"ID":"7e88dbcd-044a-4c58-8069-54de2ea049c0","Type":"ContainerDied","Data":"95ad409bd8505f81a97134704f41e2fdf9ec0a5ad2c161110ce267c571196f09"} Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.465217 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.627037 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovncontroller-config-0\") pod \"7e88dbcd-044a-4c58-8069-54de2ea049c0\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.627154 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ssh-key-openstack-edpm-ipam\") pod \"7e88dbcd-044a-4c58-8069-54de2ea049c0\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.627178 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovn-combined-ca-bundle\") pod \"7e88dbcd-044a-4c58-8069-54de2ea049c0\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.627281 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mm79d\" (UniqueName: 
\"kubernetes.io/projected/7e88dbcd-044a-4c58-8069-54de2ea049c0-kube-api-access-mm79d\") pod \"7e88dbcd-044a-4c58-8069-54de2ea049c0\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.627388 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-inventory\") pod \"7e88dbcd-044a-4c58-8069-54de2ea049c0\" (UID: \"7e88dbcd-044a-4c58-8069-54de2ea049c0\") " Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.634911 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7e88dbcd-044a-4c58-8069-54de2ea049c0" (UID: "7e88dbcd-044a-4c58-8069-54de2ea049c0"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.635191 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e88dbcd-044a-4c58-8069-54de2ea049c0-kube-api-access-mm79d" (OuterVolumeSpecName: "kube-api-access-mm79d") pod "7e88dbcd-044a-4c58-8069-54de2ea049c0" (UID: "7e88dbcd-044a-4c58-8069-54de2ea049c0"). InnerVolumeSpecName "kube-api-access-mm79d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.659185 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7e88dbcd-044a-4c58-8069-54de2ea049c0" (UID: "7e88dbcd-044a-4c58-8069-54de2ea049c0"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.664997 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7e88dbcd-044a-4c58-8069-54de2ea049c0" (UID: "7e88dbcd-044a-4c58-8069-54de2ea049c0"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.666373 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-inventory" (OuterVolumeSpecName: "inventory") pod "7e88dbcd-044a-4c58-8069-54de2ea049c0" (UID: "7e88dbcd-044a-4c58-8069-54de2ea049c0"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.729726 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mm79d\" (UniqueName: \"kubernetes.io/projected/7e88dbcd-044a-4c58-8069-54de2ea049c0-kube-api-access-mm79d\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.729773 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.729784 4745 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.729794 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:00 crc kubenswrapper[4745]: I0121 11:17:00.729807 4745 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e88dbcd-044a-4c58-8069-54de2ea049c0-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.046787 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" event={"ID":"7e88dbcd-044a-4c58-8069-54de2ea049c0","Type":"ContainerDied","Data":"8fc97968fb0380412d75ac28b2560bfa170f9f0bbacad0935c937b622185bd68"} Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.047281 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fc97968fb0380412d75ac28b2560bfa170f9f0bbacad0935c937b622185bd68" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.046871 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-65fm7" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.216689 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x"] Jan 21 11:17:01 crc kubenswrapper[4745]: E0121 11:17:01.217769 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e88dbcd-044a-4c58-8069-54de2ea049c0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.217858 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e88dbcd-044a-4c58-8069-54de2ea049c0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.218140 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e88dbcd-044a-4c58-8069-54de2ea049c0" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.219155 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.224723 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.227253 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.228286 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.228449 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.233371 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.243262 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x"] Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.244660 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.348359 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.348469 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbf2x\" (UniqueName: \"kubernetes.io/projected/200916d8-adce-4f77-b2c2-44be9da69f65-kube-api-access-vbf2x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.348534 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.348610 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.348633 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.348689 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.451073 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.452566 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbf2x\" (UniqueName: \"kubernetes.io/projected/200916d8-adce-4f77-b2c2-44be9da69f65-kube-api-access-vbf2x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.452724 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.452818 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.452914 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.453058 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.462577 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.467798 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-nova-metadata-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.475941 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.479324 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.501194 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.508672 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbf2x\" (UniqueName: \"kubernetes.io/projected/200916d8-adce-4f77-b2c2-44be9da69f65-kube-api-access-vbf2x\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:01 crc kubenswrapper[4745]: I0121 11:17:01.542229 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:02 crc kubenswrapper[4745]: I0121 11:17:02.175356 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x"] Jan 21 11:17:03 crc kubenswrapper[4745]: I0121 11:17:03.076865 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" event={"ID":"200916d8-adce-4f77-b2c2-44be9da69f65","Type":"ContainerStarted","Data":"67c0a09e2bb93f41767d3c50b807446a2b757eb7a8562026568b814648920084"} Jan 21 11:17:03 crc kubenswrapper[4745]: I0121 11:17:03.078716 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" event={"ID":"200916d8-adce-4f77-b2c2-44be9da69f65","Type":"ContainerStarted","Data":"fcbb287cbcd16596f23a7a3f26eb4aca5b27057f07d756d4f79db2368b6d0fad"} Jan 21 11:17:03 crc kubenswrapper[4745]: I0121 11:17:03.107443 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" podStartSLOduration=1.599961022 podStartE2EDuration="2.107413839s" podCreationTimestamp="2026-01-21 11:17:01 +0000 UTC" firstStartedPulling="2026-01-21 11:17:02.184722765 +0000 UTC m=+2406.645510363" lastFinishedPulling="2026-01-21 11:17:02.692175582 +0000 UTC m=+2407.152963180" observedRunningTime="2026-01-21 11:17:03.097401225 +0000 UTC m=+2407.558188813" watchObservedRunningTime="2026-01-21 11:17:03.107413839 +0000 UTC m=+2407.568201437" Jan 21 11:17:15 crc kubenswrapper[4745]: I0121 11:17:15.866252 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:17:15 crc kubenswrapper[4745]: I0121 11:17:15.867012 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:17:15 crc kubenswrapper[4745]: I0121 11:17:15.867075 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 11:17:15 crc kubenswrapper[4745]: I0121 11:17:15.868246 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:17:15 crc kubenswrapper[4745]: I0121 11:17:15.868323 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" gracePeriod=600 Jan 21 11:17:15 crc kubenswrapper[4745]: E0121 11:17:15.998515 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:17:16 crc kubenswrapper[4745]: I0121 11:17:16.218256 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" exitCode=0 Jan 21 11:17:16 crc kubenswrapper[4745]: I0121 11:17:16.218308 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628"} Jan 21 11:17:16 crc kubenswrapper[4745]: I0121 11:17:16.218345 4745 scope.go:117] "RemoveContainer" containerID="e1d844781c026bf555dfea0465014abdaecf9057a245267ab02f1183d1d50d0a" Jan 21 11:17:16 crc kubenswrapper[4745]: I0121 11:17:16.219293 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:17:16 crc kubenswrapper[4745]: E0121 11:17:16.219794 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:17:30 crc kubenswrapper[4745]: I0121 11:17:30.000943 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:17:30 crc kubenswrapper[4745]: E0121 11:17:30.001850 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:17:45 crc kubenswrapper[4745]: I0121 11:17:45.000523 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:17:45 crc kubenswrapper[4745]: E0121 11:17:45.001489 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:17:57 crc kubenswrapper[4745]: I0121 11:17:57.693498 4745 generic.go:334] "Generic (PLEG): container finished" podID="200916d8-adce-4f77-b2c2-44be9da69f65" containerID="67c0a09e2bb93f41767d3c50b807446a2b757eb7a8562026568b814648920084" exitCode=0 Jan 21 11:17:57 crc kubenswrapper[4745]: I0121 11:17:57.693710 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" event={"ID":"200916d8-adce-4f77-b2c2-44be9da69f65","Type":"ContainerDied","Data":"67c0a09e2bb93f41767d3c50b807446a2b757eb7a8562026568b814648920084"} Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.001796 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:17:59 crc kubenswrapper[4745]: E0121 11:17:59.002663 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.211925 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.324472 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-inventory\") pod \"200916d8-adce-4f77-b2c2-44be9da69f65\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.324563 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-nova-metadata-neutron-config-0\") pod \"200916d8-adce-4f77-b2c2-44be9da69f65\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.324595 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-ovn-metadata-agent-neutron-config-0\") pod \"200916d8-adce-4f77-b2c2-44be9da69f65\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.324740 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-ssh-key-openstack-edpm-ipam\") pod \"200916d8-adce-4f77-b2c2-44be9da69f65\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 
11:17:59.324811 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbf2x\" (UniqueName: \"kubernetes.io/projected/200916d8-adce-4f77-b2c2-44be9da69f65-kube-api-access-vbf2x\") pod \"200916d8-adce-4f77-b2c2-44be9da69f65\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.324932 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-metadata-combined-ca-bundle\") pod \"200916d8-adce-4f77-b2c2-44be9da69f65\" (UID: \"200916d8-adce-4f77-b2c2-44be9da69f65\") " Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.333437 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "200916d8-adce-4f77-b2c2-44be9da69f65" (UID: "200916d8-adce-4f77-b2c2-44be9da69f65"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.336845 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200916d8-adce-4f77-b2c2-44be9da69f65-kube-api-access-vbf2x" (OuterVolumeSpecName: "kube-api-access-vbf2x") pod "200916d8-adce-4f77-b2c2-44be9da69f65" (UID: "200916d8-adce-4f77-b2c2-44be9da69f65"). InnerVolumeSpecName "kube-api-access-vbf2x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.373844 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "200916d8-adce-4f77-b2c2-44be9da69f65" (UID: "200916d8-adce-4f77-b2c2-44be9da69f65"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.374081 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "200916d8-adce-4f77-b2c2-44be9da69f65" (UID: "200916d8-adce-4f77-b2c2-44be9da69f65"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.376735 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "200916d8-adce-4f77-b2c2-44be9da69f65" (UID: "200916d8-adce-4f77-b2c2-44be9da69f65"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.409204 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-inventory" (OuterVolumeSpecName: "inventory") pod "200916d8-adce-4f77-b2c2-44be9da69f65" (UID: "200916d8-adce-4f77-b2c2-44be9da69f65"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.427294 4745 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.428265 4745 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.428408 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.428477 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbf2x\" (UniqueName: \"kubernetes.io/projected/200916d8-adce-4f77-b2c2-44be9da69f65-kube-api-access-vbf2x\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.428566 4745 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.428649 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/200916d8-adce-4f77-b2c2-44be9da69f65-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.719215 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" event={"ID":"200916d8-adce-4f77-b2c2-44be9da69f65","Type":"ContainerDied","Data":"fcbb287cbcd16596f23a7a3f26eb4aca5b27057f07d756d4f79db2368b6d0fad"} Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.719265 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcbb287cbcd16596f23a7a3f26eb4aca5b27057f07d756d4f79db2368b6d0fad" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.719287 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.845026 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5"] Jan 21 11:17:59 crc kubenswrapper[4745]: E0121 11:17:59.845442 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200916d8-adce-4f77-b2c2-44be9da69f65" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.845463 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="200916d8-adce-4f77-b2c2-44be9da69f65" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.845682 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="200916d8-adce-4f77-b2c2-44be9da69f65" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.846336 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.848984 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.849171 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.849334 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.849463 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.849634 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.864695 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5"] Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.939626 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.939847 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: 
\"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.940168 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.940223 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx449\" (UniqueName: \"kubernetes.io/projected/8df35d83-d69d-4747-b617-9ef2be130951-kube-api-access-jx449\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:17:59 crc kubenswrapper[4745]: I0121 11:17:59.940487 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.042505 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.042622 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jx449\" (UniqueName: \"kubernetes.io/projected/8df35d83-d69d-4747-b617-9ef2be130951-kube-api-access-jx449\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.042740 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.042819 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.042959 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.047952 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: 
\"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.048850 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.049345 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.050600 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.068590 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jx449\" (UniqueName: \"kubernetes.io/projected/8df35d83-d69d-4747-b617-9ef2be130951-kube-api-access-jx449\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.167850 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:18:00 crc kubenswrapper[4745]: I0121 11:18:00.810456 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5"] Jan 21 11:18:01 crc kubenswrapper[4745]: I0121 11:18:01.741045 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" event={"ID":"8df35d83-d69d-4747-b617-9ef2be130951","Type":"ContainerStarted","Data":"0a86f006a66f7d74dc4c65f9af12bb7ef947c0b045b2f2bc348af37366526062"} Jan 21 11:18:01 crc kubenswrapper[4745]: I0121 11:18:01.741439 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" event={"ID":"8df35d83-d69d-4747-b617-9ef2be130951","Type":"ContainerStarted","Data":"b4e516f8565611dc053e6f86bab14cb2d509f9193b4a226d9c90f6d16c0a9813"} Jan 21 11:18:01 crc kubenswrapper[4745]: I0121 11:18:01.762876 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" podStartSLOduration=2.378686816 podStartE2EDuration="2.762856394s" podCreationTimestamp="2026-01-21 11:17:59 +0000 UTC" firstStartedPulling="2026-01-21 11:18:00.843860251 +0000 UTC m=+2465.304647849" lastFinishedPulling="2026-01-21 11:18:01.228029789 +0000 UTC m=+2465.688817427" observedRunningTime="2026-01-21 11:18:01.761544588 +0000 UTC m=+2466.222332186" watchObservedRunningTime="2026-01-21 11:18:01.762856394 +0000 UTC m=+2466.223643992" Jan 21 11:18:10 crc kubenswrapper[4745]: I0121 11:18:10.000874 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:18:10 crc kubenswrapper[4745]: E0121 11:18:10.001755 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:18:21 crc kubenswrapper[4745]: I0121 11:18:21.001176 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:18:21 crc kubenswrapper[4745]: E0121 11:18:21.002992 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:18:33 crc kubenswrapper[4745]: I0121 11:18:33.000938 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:18:33 crc kubenswrapper[4745]: E0121 11:18:33.002333 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:18:48 crc kubenswrapper[4745]: I0121 11:18:48.000242 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:18:48 crc kubenswrapper[4745]: E0121 11:18:48.002027 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:18:59 crc kubenswrapper[4745]: I0121 11:18:59.000654 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:18:59 crc kubenswrapper[4745]: E0121 11:18:59.001924 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:19:11 crc kubenswrapper[4745]: I0121 11:19:11.001161 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:19:11 crc kubenswrapper[4745]: E0121 11:19:11.002406 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:19:24 crc kubenswrapper[4745]: I0121 11:19:24.001660 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:19:24 crc kubenswrapper[4745]: E0121 11:19:24.005797 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:19:39 crc kubenswrapper[4745]: I0121 11:19:39.000192 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:19:39 crc kubenswrapper[4745]: E0121 11:19:39.001171 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:19:53 crc kubenswrapper[4745]: I0121 11:19:53.000209 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:19:53 crc kubenswrapper[4745]: E0121 11:19:53.001173 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:20:05 crc kubenswrapper[4745]: I0121 11:20:05.005512 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:20:05 crc kubenswrapper[4745]: E0121 11:20:05.007107 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:20:18 crc kubenswrapper[4745]: I0121 11:20:18.001867 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:20:18 crc kubenswrapper[4745]: E0121 11:20:18.003004 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:20:29 crc kubenswrapper[4745]: I0121 11:20:29.000802 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:20:29 crc kubenswrapper[4745]: E0121 11:20:29.003224 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:20:44 crc kubenswrapper[4745]: I0121 11:20:44.000992 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:20:44 crc kubenswrapper[4745]: E0121 11:20:44.002237 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:20:56 crc kubenswrapper[4745]: I0121 11:20:56.012685 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:20:56 crc kubenswrapper[4745]: E0121 11:20:56.014992 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:21:11 crc kubenswrapper[4745]: I0121 11:21:11.000936 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:21:11 crc kubenswrapper[4745]: E0121 11:21:11.001988 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:21:23 crc kubenswrapper[4745]: I0121 11:21:23.000235 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:21:23 crc kubenswrapper[4745]: E0121 11:21:23.001395 4745 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:21:36 crc kubenswrapper[4745]: I0121 11:21:36.007473 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:21:36 crc kubenswrapper[4745]: E0121 11:21:36.008667 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:21:51 crc kubenswrapper[4745]: I0121 11:21:51.001169 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:21:51 crc kubenswrapper[4745]: E0121 11:21:51.002405 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:22:05 crc kubenswrapper[4745]: I0121 11:22:05.001489 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:22:05 crc kubenswrapper[4745]: E0121 11:22:05.003161 4745 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:22:18 crc kubenswrapper[4745]: I0121 11:22:18.001434 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:22:18 crc kubenswrapper[4745]: I0121 11:22:18.871515 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"aa3dc7b3225d6765513e49f385b7965256da8bb3f43b10e15df1d49cfb026b0c"} Jan 21 11:22:51 crc kubenswrapper[4745]: I0121 11:22:51.229682 4745 generic.go:334] "Generic (PLEG): container finished" podID="8df35d83-d69d-4747-b617-9ef2be130951" containerID="0a86f006a66f7d74dc4c65f9af12bb7ef947c0b045b2f2bc348af37366526062" exitCode=0 Jan 21 11:22:51 crc kubenswrapper[4745]: I0121 11:22:51.229878 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" event={"ID":"8df35d83-d69d-4747-b617-9ef2be130951","Type":"ContainerDied","Data":"0a86f006a66f7d74dc4c65f9af12bb7ef947c0b045b2f2bc348af37366526062"} Jan 21 11:22:52 crc kubenswrapper[4745]: I0121 11:22:52.756245 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:22:52 crc kubenswrapper[4745]: I0121 11:22:52.929407 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-secret-0\") pod \"8df35d83-d69d-4747-b617-9ef2be130951\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " Jan 21 11:22:52 crc kubenswrapper[4745]: I0121 11:22:52.929581 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-combined-ca-bundle\") pod \"8df35d83-d69d-4747-b617-9ef2be130951\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " Jan 21 11:22:52 crc kubenswrapper[4745]: I0121 11:22:52.929612 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jx449\" (UniqueName: \"kubernetes.io/projected/8df35d83-d69d-4747-b617-9ef2be130951-kube-api-access-jx449\") pod \"8df35d83-d69d-4747-b617-9ef2be130951\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " Jan 21 11:22:52 crc kubenswrapper[4745]: I0121 11:22:52.929711 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-inventory\") pod \"8df35d83-d69d-4747-b617-9ef2be130951\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " Jan 21 11:22:52 crc kubenswrapper[4745]: I0121 11:22:52.930647 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-ssh-key-openstack-edpm-ipam\") pod \"8df35d83-d69d-4747-b617-9ef2be130951\" (UID: \"8df35d83-d69d-4747-b617-9ef2be130951\") " Jan 21 11:22:52 crc kubenswrapper[4745]: I0121 11:22:52.940810 4745 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df35d83-d69d-4747-b617-9ef2be130951-kube-api-access-jx449" (OuterVolumeSpecName: "kube-api-access-jx449") pod "8df35d83-d69d-4747-b617-9ef2be130951" (UID: "8df35d83-d69d-4747-b617-9ef2be130951"). InnerVolumeSpecName "kube-api-access-jx449". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:22:52 crc kubenswrapper[4745]: I0121 11:22:52.941158 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8df35d83-d69d-4747-b617-9ef2be130951" (UID: "8df35d83-d69d-4747-b617-9ef2be130951"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:52 crc kubenswrapper[4745]: I0121 11:22:52.967025 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-inventory" (OuterVolumeSpecName: "inventory") pod "8df35d83-d69d-4747-b617-9ef2be130951" (UID: "8df35d83-d69d-4747-b617-9ef2be130951"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:52 crc kubenswrapper[4745]: I0121 11:22:52.971846 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8df35d83-d69d-4747-b617-9ef2be130951" (UID: "8df35d83-d69d-4747-b617-9ef2be130951"). InnerVolumeSpecName "libvirt-secret-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:52 crc kubenswrapper[4745]: I0121 11:22:52.972802 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8df35d83-d69d-4747-b617-9ef2be130951" (UID: "8df35d83-d69d-4747-b617-9ef2be130951"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.033071 4745 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.033122 4745 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.033135 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jx449\" (UniqueName: \"kubernetes.io/projected/8df35d83-d69d-4747-b617-9ef2be130951-kube-api-access-jx449\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.033145 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.033154 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8df35d83-d69d-4747-b617-9ef2be130951-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.256099 4745 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" event={"ID":"8df35d83-d69d-4747-b617-9ef2be130951","Type":"ContainerDied","Data":"b4e516f8565611dc053e6f86bab14cb2d509f9193b4a226d9c90f6d16c0a9813"} Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.256173 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4e516f8565611dc053e6f86bab14cb2d509f9193b4a226d9c90f6d16c0a9813" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.256242 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.430135 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd"] Jan 21 11:22:53 crc kubenswrapper[4745]: E0121 11:22:53.430797 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df35d83-d69d-4747-b617-9ef2be130951" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.430828 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df35d83-d69d-4747-b617-9ef2be130951" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.434286 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df35d83-d69d-4747-b617-9ef2be130951" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.435487 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.439890 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.440149 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.440428 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.440577 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.440747 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.444340 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.448264 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.455307 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd"] Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.544962 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.545048 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56glw\" (UniqueName: \"kubernetes.io/projected/2fc7129c-3f8a-42cc-baf6-d499c5582e71-kube-api-access-56glw\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.545071 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.545090 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.545137 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.545160 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.545205 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.545275 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.545302 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.647227 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: 
\"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.647667 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.647837 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.647925 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.648011 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56glw\" (UniqueName: \"kubernetes.io/projected/2fc7129c-3f8a-42cc-baf6-d499c5582e71-kube-api-access-56glw\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.648091 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.648186 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.648458 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.648600 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.650863 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.655235 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.655517 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.656179 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.657736 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.665888 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.666780 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.668648 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.680490 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56glw\" (UniqueName: \"kubernetes.io/projected/2fc7129c-3f8a-42cc-baf6-d499c5582e71-kube-api-access-56glw\") pod \"nova-edpm-deployment-openstack-edpm-ipam-gmznd\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:53 crc kubenswrapper[4745]: I0121 11:22:53.760187 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:22:54 crc kubenswrapper[4745]: I0121 11:22:54.449857 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd"] Jan 21 11:22:54 crc kubenswrapper[4745]: I0121 11:22:54.471358 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:22:55 crc kubenswrapper[4745]: I0121 11:22:55.286728 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" event={"ID":"2fc7129c-3f8a-42cc-baf6-d499c5582e71","Type":"ContainerStarted","Data":"7f540024449e4e5b50a99bfcf33066c46f1dbeeb5b93fd473341a4ea724bc3e4"} Jan 21 11:22:56 crc kubenswrapper[4745]: I0121 11:22:56.298024 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" event={"ID":"2fc7129c-3f8a-42cc-baf6-d499c5582e71","Type":"ContainerStarted","Data":"03bfcf68d53f3e61ea4d6b0fdaa260dee2b84e10a6342e069f816285ec66f3b2"} Jan 21 11:22:56 crc kubenswrapper[4745]: I0121 11:22:56.335968 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" podStartSLOduration=2.5976064340000002 podStartE2EDuration="3.335938766s" podCreationTimestamp="2026-01-21 11:22:53 +0000 UTC" firstStartedPulling="2026-01-21 11:22:54.4693667 +0000 UTC m=+2758.930154298" lastFinishedPulling="2026-01-21 11:22:55.207699032 +0000 UTC m=+2759.668486630" observedRunningTime="2026-01-21 11:22:56.324821503 +0000 UTC m=+2760.785609101" watchObservedRunningTime="2026-01-21 11:22:56.335938766 +0000 UTC m=+2760.796726364" Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.629934 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bk2q9"] Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.633582 4745 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.646437 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bk2q9"] Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.808864 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-utilities\") pod \"redhat-operators-bk2q9\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.808926 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-catalog-content\") pod \"redhat-operators-bk2q9\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.809019 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbhlx\" (UniqueName: \"kubernetes.io/projected/a400a116-68b2-4344-9ea1-4e129c15f45b-kube-api-access-wbhlx\") pod \"redhat-operators-bk2q9\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.911384 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbhlx\" (UniqueName: \"kubernetes.io/projected/a400a116-68b2-4344-9ea1-4e129c15f45b-kube-api-access-wbhlx\") pod \"redhat-operators-bk2q9\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.911592 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-utilities\") pod \"redhat-operators-bk2q9\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.911626 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-catalog-content\") pod \"redhat-operators-bk2q9\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.912226 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-catalog-content\") pod \"redhat-operators-bk2q9\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.912249 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-utilities\") pod \"redhat-operators-bk2q9\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.942227 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbhlx\" (UniqueName: \"kubernetes.io/projected/a400a116-68b2-4344-9ea1-4e129c15f45b-kube-api-access-wbhlx\") pod \"redhat-operators-bk2q9\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:08 crc kubenswrapper[4745]: I0121 11:23:08.957092 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:09 crc kubenswrapper[4745]: I0121 11:23:09.629786 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bk2q9"] Jan 21 11:23:10 crc kubenswrapper[4745]: I0121 11:23:10.491026 4745 generic.go:334] "Generic (PLEG): container finished" podID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerID="5aae8744623983f06770f343779f775db78901b1eb153498c076ce455dccb129" exitCode=0 Jan 21 11:23:10 crc kubenswrapper[4745]: I0121 11:23:10.491113 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk2q9" event={"ID":"a400a116-68b2-4344-9ea1-4e129c15f45b","Type":"ContainerDied","Data":"5aae8744623983f06770f343779f775db78901b1eb153498c076ce455dccb129"} Jan 21 11:23:10 crc kubenswrapper[4745]: I0121 11:23:10.491812 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk2q9" event={"ID":"a400a116-68b2-4344-9ea1-4e129c15f45b","Type":"ContainerStarted","Data":"0c624b9196aa712dbf0d7e80ae6cde8334ce65bce3e9e7b28c4d077af869ef5d"} Jan 21 11:23:12 crc kubenswrapper[4745]: I0121 11:23:12.550904 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk2q9" event={"ID":"a400a116-68b2-4344-9ea1-4e129c15f45b","Type":"ContainerStarted","Data":"a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3"} Jan 21 11:23:16 crc kubenswrapper[4745]: I0121 11:23:16.631854 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk2q9" event={"ID":"a400a116-68b2-4344-9ea1-4e129c15f45b","Type":"ContainerDied","Data":"a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3"} Jan 21 11:23:16 crc kubenswrapper[4745]: I0121 11:23:16.632392 4745 generic.go:334] "Generic (PLEG): container finished" podID="a400a116-68b2-4344-9ea1-4e129c15f45b" 
containerID="a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3" exitCode=0 Jan 21 11:23:17 crc kubenswrapper[4745]: I0121 11:23:17.657203 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk2q9" event={"ID":"a400a116-68b2-4344-9ea1-4e129c15f45b","Type":"ContainerStarted","Data":"1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1"} Jan 21 11:23:17 crc kubenswrapper[4745]: I0121 11:23:17.705088 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bk2q9" podStartSLOduration=3.052739861 podStartE2EDuration="9.705054312s" podCreationTimestamp="2026-01-21 11:23:08 +0000 UTC" firstStartedPulling="2026-01-21 11:23:10.493505187 +0000 UTC m=+2774.954292785" lastFinishedPulling="2026-01-21 11:23:17.145819638 +0000 UTC m=+2781.606607236" observedRunningTime="2026-01-21 11:23:17.694045444 +0000 UTC m=+2782.154833042" watchObservedRunningTime="2026-01-21 11:23:17.705054312 +0000 UTC m=+2782.165841910" Jan 21 11:23:18 crc kubenswrapper[4745]: I0121 11:23:18.957784 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:18 crc kubenswrapper[4745]: I0121 11:23:18.958335 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:20 crc kubenswrapper[4745]: I0121 11:23:20.020162 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bk2q9" podUID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerName="registry-server" probeResult="failure" output=< Jan 21 11:23:20 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:23:20 crc kubenswrapper[4745]: > Jan 21 11:23:30 crc kubenswrapper[4745]: I0121 11:23:30.032887 4745 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-bk2q9" podUID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerName="registry-server" probeResult="failure" output=< Jan 21 11:23:30 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:23:30 crc kubenswrapper[4745]: > Jan 21 11:23:39 crc kubenswrapper[4745]: I0121 11:23:39.011417 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:39 crc kubenswrapper[4745]: I0121 11:23:39.098705 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:39 crc kubenswrapper[4745]: I0121 11:23:39.854260 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bk2q9"] Jan 21 11:23:40 crc kubenswrapper[4745]: I0121 11:23:40.898280 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bk2q9" podUID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerName="registry-server" containerID="cri-o://1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1" gracePeriod=2 Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.431231 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.565266 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbhlx\" (UniqueName: \"kubernetes.io/projected/a400a116-68b2-4344-9ea1-4e129c15f45b-kube-api-access-wbhlx\") pod \"a400a116-68b2-4344-9ea1-4e129c15f45b\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.565402 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-utilities\") pod \"a400a116-68b2-4344-9ea1-4e129c15f45b\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.565472 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-catalog-content\") pod \"a400a116-68b2-4344-9ea1-4e129c15f45b\" (UID: \"a400a116-68b2-4344-9ea1-4e129c15f45b\") " Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.566130 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-utilities" (OuterVolumeSpecName: "utilities") pod "a400a116-68b2-4344-9ea1-4e129c15f45b" (UID: "a400a116-68b2-4344-9ea1-4e129c15f45b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.573631 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a400a116-68b2-4344-9ea1-4e129c15f45b-kube-api-access-wbhlx" (OuterVolumeSpecName: "kube-api-access-wbhlx") pod "a400a116-68b2-4344-9ea1-4e129c15f45b" (UID: "a400a116-68b2-4344-9ea1-4e129c15f45b"). InnerVolumeSpecName "kube-api-access-wbhlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.672285 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.672330 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbhlx\" (UniqueName: \"kubernetes.io/projected/a400a116-68b2-4344-9ea1-4e129c15f45b-kube-api-access-wbhlx\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.686385 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a400a116-68b2-4344-9ea1-4e129c15f45b" (UID: "a400a116-68b2-4344-9ea1-4e129c15f45b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.773723 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a400a116-68b2-4344-9ea1-4e129c15f45b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.908809 4745 generic.go:334] "Generic (PLEG): container finished" podID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerID="1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1" exitCode=0 Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.908866 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bk2q9" event={"ID":"a400a116-68b2-4344-9ea1-4e129c15f45b","Type":"ContainerDied","Data":"1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1"} Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.908901 4745 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-bk2q9" event={"ID":"a400a116-68b2-4344-9ea1-4e129c15f45b","Type":"ContainerDied","Data":"0c624b9196aa712dbf0d7e80ae6cde8334ce65bce3e9e7b28c4d077af869ef5d"} Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.908922 4745 scope.go:117] "RemoveContainer" containerID="1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1" Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.908924 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bk2q9" Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.933295 4745 scope.go:117] "RemoveContainer" containerID="a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3" Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.960579 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bk2q9"] Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.980711 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bk2q9"] Jan 21 11:23:41 crc kubenswrapper[4745]: I0121 11:23:41.983176 4745 scope.go:117] "RemoveContainer" containerID="5aae8744623983f06770f343779f775db78901b1eb153498c076ce455dccb129" Jan 21 11:23:42 crc kubenswrapper[4745]: I0121 11:23:42.019662 4745 scope.go:117] "RemoveContainer" containerID="1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1" Jan 21 11:23:42 crc kubenswrapper[4745]: E0121 11:23:42.020085 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1\": container with ID starting with 1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1 not found: ID does not exist" containerID="1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1" Jan 21 11:23:42 crc kubenswrapper[4745]: I0121 11:23:42.020121 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1"} err="failed to get container status \"1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1\": rpc error: code = NotFound desc = could not find container \"1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1\": container with ID starting with 1d8dfc355bfcbd23a3038ffb305d00d8571e37b673a694440ff7f4b8468af8f1 not found: ID does not exist" Jan 21 11:23:42 crc kubenswrapper[4745]: I0121 11:23:42.020149 4745 scope.go:117] "RemoveContainer" containerID="a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3" Jan 21 11:23:42 crc kubenswrapper[4745]: E0121 11:23:42.020505 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3\": container with ID starting with a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3 not found: ID does not exist" containerID="a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3" Jan 21 11:23:42 crc kubenswrapper[4745]: I0121 11:23:42.020562 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3"} err="failed to get container status \"a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3\": rpc error: code = NotFound desc = could not find container \"a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3\": container with ID starting with a18258625a6432bbf5ba52a59bbb0e58ab67f41d6ff52670e79cfe15e64c08b3 not found: ID does not exist" Jan 21 11:23:42 crc kubenswrapper[4745]: I0121 11:23:42.020580 4745 scope.go:117] "RemoveContainer" containerID="5aae8744623983f06770f343779f775db78901b1eb153498c076ce455dccb129" Jan 21 11:23:42 crc kubenswrapper[4745]: E0121 
11:23:42.020863 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5aae8744623983f06770f343779f775db78901b1eb153498c076ce455dccb129\": container with ID starting with 5aae8744623983f06770f343779f775db78901b1eb153498c076ce455dccb129 not found: ID does not exist" containerID="5aae8744623983f06770f343779f775db78901b1eb153498c076ce455dccb129" Jan 21 11:23:42 crc kubenswrapper[4745]: I0121 11:23:42.020907 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5aae8744623983f06770f343779f775db78901b1eb153498c076ce455dccb129"} err="failed to get container status \"5aae8744623983f06770f343779f775db78901b1eb153498c076ce455dccb129\": rpc error: code = NotFound desc = could not find container \"5aae8744623983f06770f343779f775db78901b1eb153498c076ce455dccb129\": container with ID starting with 5aae8744623983f06770f343779f775db78901b1eb153498c076ce455dccb129 not found: ID does not exist" Jan 21 11:23:42 crc kubenswrapper[4745]: I0121 11:23:42.027986 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a400a116-68b2-4344-9ea1-4e129c15f45b" path="/var/lib/kubelet/pods/a400a116-68b2-4344-9ea1-4e129c15f45b/volumes" Jan 21 11:23:53 crc kubenswrapper[4745]: I0121 11:23:53.808253 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ss5k6"] Jan 21 11:23:53 crc kubenswrapper[4745]: E0121 11:23:53.809518 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerName="extract-utilities" Jan 21 11:23:53 crc kubenswrapper[4745]: I0121 11:23:53.809560 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerName="extract-utilities" Jan 21 11:23:53 crc kubenswrapper[4745]: E0121 11:23:53.809596 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerName="extract-content" Jan 21 11:23:53 crc kubenswrapper[4745]: I0121 11:23:53.809605 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerName="extract-content" Jan 21 11:23:53 crc kubenswrapper[4745]: E0121 11:23:53.809623 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerName="registry-server" Jan 21 11:23:53 crc kubenswrapper[4745]: I0121 11:23:53.809633 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerName="registry-server" Jan 21 11:23:53 crc kubenswrapper[4745]: I0121 11:23:53.809887 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a400a116-68b2-4344-9ea1-4e129c15f45b" containerName="registry-server" Jan 21 11:23:53 crc kubenswrapper[4745]: I0121 11:23:53.811663 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:23:53 crc kubenswrapper[4745]: I0121 11:23:53.823553 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ss5k6"] Jan 21 11:23:53 crc kubenswrapper[4745]: I0121 11:23:53.942945 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzpwq\" (UniqueName: \"kubernetes.io/projected/b252b476-c688-49b2-bf33-c1c7b4147fcd-kube-api-access-zzpwq\") pod \"redhat-marketplace-ss5k6\" (UID: \"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:23:53 crc kubenswrapper[4745]: I0121 11:23:53.943294 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-catalog-content\") pod \"redhat-marketplace-ss5k6\" (UID: 
\"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:23:53 crc kubenswrapper[4745]: I0121 11:23:53.943675 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-utilities\") pod \"redhat-marketplace-ss5k6\" (UID: \"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:23:54 crc kubenswrapper[4745]: I0121 11:23:54.045427 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzpwq\" (UniqueName: \"kubernetes.io/projected/b252b476-c688-49b2-bf33-c1c7b4147fcd-kube-api-access-zzpwq\") pod \"redhat-marketplace-ss5k6\" (UID: \"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:23:54 crc kubenswrapper[4745]: I0121 11:23:54.045620 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-catalog-content\") pod \"redhat-marketplace-ss5k6\" (UID: \"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:23:54 crc kubenswrapper[4745]: I0121 11:23:54.045757 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-utilities\") pod \"redhat-marketplace-ss5k6\" (UID: \"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:23:54 crc kubenswrapper[4745]: I0121 11:23:54.046555 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-utilities\") pod \"redhat-marketplace-ss5k6\" (UID: 
\"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:23:54 crc kubenswrapper[4745]: I0121 11:23:54.046868 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-catalog-content\") pod \"redhat-marketplace-ss5k6\" (UID: \"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:23:54 crc kubenswrapper[4745]: I0121 11:23:54.087930 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzpwq\" (UniqueName: \"kubernetes.io/projected/b252b476-c688-49b2-bf33-c1c7b4147fcd-kube-api-access-zzpwq\") pod \"redhat-marketplace-ss5k6\" (UID: \"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:23:54 crc kubenswrapper[4745]: I0121 11:23:54.176949 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:23:54 crc kubenswrapper[4745]: I0121 11:23:54.757291 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ss5k6"] Jan 21 11:23:55 crc kubenswrapper[4745]: I0121 11:23:55.041059 4745 generic.go:334] "Generic (PLEG): container finished" podID="b252b476-c688-49b2-bf33-c1c7b4147fcd" containerID="6c4188bc3d24fa699b3106604225a54b768bdfbbfa870a8a26da108fb5855442" exitCode=0 Jan 21 11:23:55 crc kubenswrapper[4745]: I0121 11:23:55.041395 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5k6" event={"ID":"b252b476-c688-49b2-bf33-c1c7b4147fcd","Type":"ContainerDied","Data":"6c4188bc3d24fa699b3106604225a54b768bdfbbfa870a8a26da108fb5855442"} Jan 21 11:23:55 crc kubenswrapper[4745]: I0121 11:23:55.041517 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5k6" 
event={"ID":"b252b476-c688-49b2-bf33-c1c7b4147fcd","Type":"ContainerStarted","Data":"359aef90d393bdce6f1021ca03436f33b2eca318b921b91bd971dfd159d9f29c"} Jan 21 11:23:56 crc kubenswrapper[4745]: I0121 11:23:56.073680 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5k6" event={"ID":"b252b476-c688-49b2-bf33-c1c7b4147fcd","Type":"ContainerStarted","Data":"a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426"} Jan 21 11:23:57 crc kubenswrapper[4745]: I0121 11:23:57.084774 4745 generic.go:334] "Generic (PLEG): container finished" podID="b252b476-c688-49b2-bf33-c1c7b4147fcd" containerID="a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426" exitCode=0 Jan 21 11:23:57 crc kubenswrapper[4745]: I0121 11:23:57.084833 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5k6" event={"ID":"b252b476-c688-49b2-bf33-c1c7b4147fcd","Type":"ContainerDied","Data":"a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426"} Jan 21 11:23:58 crc kubenswrapper[4745]: I0121 11:23:58.095119 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5k6" event={"ID":"b252b476-c688-49b2-bf33-c1c7b4147fcd","Type":"ContainerStarted","Data":"11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758"} Jan 21 11:23:58 crc kubenswrapper[4745]: I0121 11:23:58.125421 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ss5k6" podStartSLOduration=2.68094964 podStartE2EDuration="5.125398171s" podCreationTimestamp="2026-01-21 11:23:53 +0000 UTC" firstStartedPulling="2026-01-21 11:23:55.044360206 +0000 UTC m=+2819.505147804" lastFinishedPulling="2026-01-21 11:23:57.488808737 +0000 UTC m=+2821.949596335" observedRunningTime="2026-01-21 11:23:58.114091464 +0000 UTC m=+2822.574879072" watchObservedRunningTime="2026-01-21 11:23:58.125398171 +0000 UTC 
m=+2822.586185769" Jan 21 11:24:04 crc kubenswrapper[4745]: I0121 11:24:04.178005 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:24:04 crc kubenswrapper[4745]: I0121 11:24:04.178464 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:24:04 crc kubenswrapper[4745]: I0121 11:24:04.267263 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:24:05 crc kubenswrapper[4745]: I0121 11:24:05.209853 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:24:05 crc kubenswrapper[4745]: I0121 11:24:05.273201 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ss5k6"] Jan 21 11:24:07 crc kubenswrapper[4745]: I0121 11:24:07.190876 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ss5k6" podUID="b252b476-c688-49b2-bf33-c1c7b4147fcd" containerName="registry-server" containerID="cri-o://11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758" gracePeriod=2 Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.207200 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.209336 4745 generic.go:334] "Generic (PLEG): container finished" podID="b252b476-c688-49b2-bf33-c1c7b4147fcd" containerID="11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758" exitCode=0 Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.209432 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5k6" event={"ID":"b252b476-c688-49b2-bf33-c1c7b4147fcd","Type":"ContainerDied","Data":"11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758"} Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.209519 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ss5k6" event={"ID":"b252b476-c688-49b2-bf33-c1c7b4147fcd","Type":"ContainerDied","Data":"359aef90d393bdce6f1021ca03436f33b2eca318b921b91bd971dfd159d9f29c"} Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.209566 4745 scope.go:117] "RemoveContainer" containerID="11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.259628 4745 scope.go:117] "RemoveContainer" containerID="a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.308302 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-utilities\") pod \"b252b476-c688-49b2-bf33-c1c7b4147fcd\" (UID: \"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.309650 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-utilities" (OuterVolumeSpecName: "utilities") pod "b252b476-c688-49b2-bf33-c1c7b4147fcd" (UID: 
"b252b476-c688-49b2-bf33-c1c7b4147fcd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.309703 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzpwq\" (UniqueName: \"kubernetes.io/projected/b252b476-c688-49b2-bf33-c1c7b4147fcd-kube-api-access-zzpwq\") pod \"b252b476-c688-49b2-bf33-c1c7b4147fcd\" (UID: \"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.309852 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-catalog-content\") pod \"b252b476-c688-49b2-bf33-c1c7b4147fcd\" (UID: \"b252b476-c688-49b2-bf33-c1c7b4147fcd\") " Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.319499 4745 scope.go:117] "RemoveContainer" containerID="6c4188bc3d24fa699b3106604225a54b768bdfbbfa870a8a26da108fb5855442" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.321877 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.334202 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b252b476-c688-49b2-bf33-c1c7b4147fcd-kube-api-access-zzpwq" (OuterVolumeSpecName: "kube-api-access-zzpwq") pod "b252b476-c688-49b2-bf33-c1c7b4147fcd" (UID: "b252b476-c688-49b2-bf33-c1c7b4147fcd"). InnerVolumeSpecName "kube-api-access-zzpwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.346601 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b252b476-c688-49b2-bf33-c1c7b4147fcd" (UID: "b252b476-c688-49b2-bf33-c1c7b4147fcd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.389154 4745 scope.go:117] "RemoveContainer" containerID="11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758" Jan 21 11:24:08 crc kubenswrapper[4745]: E0121 11:24:08.389954 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758\": container with ID starting with 11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758 not found: ID does not exist" containerID="11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.390020 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758"} err="failed to get container status \"11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758\": rpc error: code = NotFound desc = could not find container \"11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758\": container with ID starting with 11abd42aa39f3c0725c732d3bbd3c4547d96e88d0e40b0278edea59c92ab6758 not found: ID does not exist" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.390076 4745 scope.go:117] "RemoveContainer" containerID="a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426" Jan 21 11:24:08 crc kubenswrapper[4745]: E0121 11:24:08.390410 4745 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426\": container with ID starting with a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426 not found: ID does not exist" containerID="a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.390435 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426"} err="failed to get container status \"a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426\": rpc error: code = NotFound desc = could not find container \"a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426\": container with ID starting with a2a15634a2d38f784e580945a4ae5780b074062fac1258cf6b720bb3f45bb426 not found: ID does not exist" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.390452 4745 scope.go:117] "RemoveContainer" containerID="6c4188bc3d24fa699b3106604225a54b768bdfbbfa870a8a26da108fb5855442" Jan 21 11:24:08 crc kubenswrapper[4745]: E0121 11:24:08.390881 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c4188bc3d24fa699b3106604225a54b768bdfbbfa870a8a26da108fb5855442\": container with ID starting with 6c4188bc3d24fa699b3106604225a54b768bdfbbfa870a8a26da108fb5855442 not found: ID does not exist" containerID="6c4188bc3d24fa699b3106604225a54b768bdfbbfa870a8a26da108fb5855442" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.390906 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c4188bc3d24fa699b3106604225a54b768bdfbbfa870a8a26da108fb5855442"} err="failed to get container status \"6c4188bc3d24fa699b3106604225a54b768bdfbbfa870a8a26da108fb5855442\": rpc error: code = NotFound desc = could 
not find container \"6c4188bc3d24fa699b3106604225a54b768bdfbbfa870a8a26da108fb5855442\": container with ID starting with 6c4188bc3d24fa699b3106604225a54b768bdfbbfa870a8a26da108fb5855442 not found: ID does not exist" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.424984 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzpwq\" (UniqueName: \"kubernetes.io/projected/b252b476-c688-49b2-bf33-c1c7b4147fcd-kube-api-access-zzpwq\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:08 crc kubenswrapper[4745]: I0121 11:24:08.425457 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b252b476-c688-49b2-bf33-c1c7b4147fcd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:09 crc kubenswrapper[4745]: I0121 11:24:09.224026 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ss5k6" Jan 21 11:24:09 crc kubenswrapper[4745]: I0121 11:24:09.263016 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ss5k6"] Jan 21 11:24:09 crc kubenswrapper[4745]: I0121 11:24:09.272701 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ss5k6"] Jan 21 11:24:10 crc kubenswrapper[4745]: I0121 11:24:10.013935 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b252b476-c688-49b2-bf33-c1c7b4147fcd" path="/var/lib/kubelet/pods/b252b476-c688-49b2-bf33-c1c7b4147fcd/volumes" Jan 21 11:24:35 crc kubenswrapper[4745]: I0121 11:24:35.988516 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tgmqg"] Jan 21 11:24:35 crc kubenswrapper[4745]: E0121 11:24:35.990177 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b252b476-c688-49b2-bf33-c1c7b4147fcd" containerName="extract-utilities" Jan 21 11:24:35 crc kubenswrapper[4745]: I0121 11:24:35.990196 
4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b252b476-c688-49b2-bf33-c1c7b4147fcd" containerName="extract-utilities" Jan 21 11:24:35 crc kubenswrapper[4745]: E0121 11:24:35.990218 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b252b476-c688-49b2-bf33-c1c7b4147fcd" containerName="extract-content" Jan 21 11:24:35 crc kubenswrapper[4745]: I0121 11:24:35.990225 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b252b476-c688-49b2-bf33-c1c7b4147fcd" containerName="extract-content" Jan 21 11:24:35 crc kubenswrapper[4745]: E0121 11:24:35.990238 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b252b476-c688-49b2-bf33-c1c7b4147fcd" containerName="registry-server" Jan 21 11:24:35 crc kubenswrapper[4745]: I0121 11:24:35.990244 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b252b476-c688-49b2-bf33-c1c7b4147fcd" containerName="registry-server" Jan 21 11:24:35 crc kubenswrapper[4745]: I0121 11:24:35.990468 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b252b476-c688-49b2-bf33-c1c7b4147fcd" containerName="registry-server" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.000151 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.021927 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgmqg"] Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.050229 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-catalog-content\") pod \"certified-operators-tgmqg\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.050350 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-utilities\") pod \"certified-operators-tgmqg\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.050422 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-866th\" (UniqueName: \"kubernetes.io/projected/76021ce7-895d-4c9c-8862-b34ccaa63a22-kube-api-access-866th\") pod \"certified-operators-tgmqg\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.151723 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-866th\" (UniqueName: \"kubernetes.io/projected/76021ce7-895d-4c9c-8862-b34ccaa63a22-kube-api-access-866th\") pod \"certified-operators-tgmqg\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.152100 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-catalog-content\") pod \"certified-operators-tgmqg\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.152291 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-catalog-content\") pod \"certified-operators-tgmqg\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.152513 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-utilities\") pod \"certified-operators-tgmqg\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.152836 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-utilities\") pod \"certified-operators-tgmqg\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.178771 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-866th\" (UniqueName: \"kubernetes.io/projected/76021ce7-895d-4c9c-8862-b34ccaa63a22-kube-api-access-866th\") pod \"certified-operators-tgmqg\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.345981 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:36 crc kubenswrapper[4745]: I0121 11:24:36.979710 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tgmqg"] Jan 21 11:24:37 crc kubenswrapper[4745]: I0121 11:24:37.492892 4745 generic.go:334] "Generic (PLEG): container finished" podID="76021ce7-895d-4c9c-8862-b34ccaa63a22" containerID="b5621de655cef61bbd5469c6c853aa0753860230467cbbf2c88f2a2aa5903bf1" exitCode=0 Jan 21 11:24:37 crc kubenswrapper[4745]: I0121 11:24:37.492935 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgmqg" event={"ID":"76021ce7-895d-4c9c-8862-b34ccaa63a22","Type":"ContainerDied","Data":"b5621de655cef61bbd5469c6c853aa0753860230467cbbf2c88f2a2aa5903bf1"} Jan 21 11:24:37 crc kubenswrapper[4745]: I0121 11:24:37.493212 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgmqg" event={"ID":"76021ce7-895d-4c9c-8862-b34ccaa63a22","Type":"ContainerStarted","Data":"ea625f8f49749ae333eea7161ae3fd1565d3180b329c110d29194b6cb78fb938"} Jan 21 11:24:38 crc kubenswrapper[4745]: I0121 11:24:38.506107 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgmqg" event={"ID":"76021ce7-895d-4c9c-8862-b34ccaa63a22","Type":"ContainerStarted","Data":"5fa40b7e08f9550a298f989f9b3bfc2a14e32206a0267413c9eebf20daaf8f37"} Jan 21 11:24:39 crc kubenswrapper[4745]: I0121 11:24:39.517279 4745 generic.go:334] "Generic (PLEG): container finished" podID="76021ce7-895d-4c9c-8862-b34ccaa63a22" containerID="5fa40b7e08f9550a298f989f9b3bfc2a14e32206a0267413c9eebf20daaf8f37" exitCode=0 Jan 21 11:24:39 crc kubenswrapper[4745]: I0121 11:24:39.517329 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgmqg" 
event={"ID":"76021ce7-895d-4c9c-8862-b34ccaa63a22","Type":"ContainerDied","Data":"5fa40b7e08f9550a298f989f9b3bfc2a14e32206a0267413c9eebf20daaf8f37"} Jan 21 11:24:40 crc kubenswrapper[4745]: I0121 11:24:40.529273 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgmqg" event={"ID":"76021ce7-895d-4c9c-8862-b34ccaa63a22","Type":"ContainerStarted","Data":"d3e36974ce589abef058cd32c2a1f5851adba5ef41c89cc302c78e91c98b91e1"} Jan 21 11:24:40 crc kubenswrapper[4745]: I0121 11:24:40.564724 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tgmqg" podStartSLOduration=3.163179307 podStartE2EDuration="5.564700142s" podCreationTimestamp="2026-01-21 11:24:35 +0000 UTC" firstStartedPulling="2026-01-21 11:24:37.494751129 +0000 UTC m=+2861.955538737" lastFinishedPulling="2026-01-21 11:24:39.896271974 +0000 UTC m=+2864.357059572" observedRunningTime="2026-01-21 11:24:40.556739166 +0000 UTC m=+2865.017526764" watchObservedRunningTime="2026-01-21 11:24:40.564700142 +0000 UTC m=+2865.025487760" Jan 21 11:24:45 crc kubenswrapper[4745]: I0121 11:24:45.867635 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:24:45 crc kubenswrapper[4745]: I0121 11:24:45.868236 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:24:46 crc kubenswrapper[4745]: I0121 11:24:46.346295 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:46 crc kubenswrapper[4745]: I0121 11:24:46.346522 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:46 crc kubenswrapper[4745]: I0121 11:24:46.392705 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:46 crc kubenswrapper[4745]: I0121 11:24:46.655897 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:46 crc kubenswrapper[4745]: I0121 11:24:46.716043 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgmqg"] Jan 21 11:24:48 crc kubenswrapper[4745]: I0121 11:24:48.609822 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tgmqg" podUID="76021ce7-895d-4c9c-8862-b34ccaa63a22" containerName="registry-server" containerID="cri-o://d3e36974ce589abef058cd32c2a1f5851adba5ef41c89cc302c78e91c98b91e1" gracePeriod=2 Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.624614 4745 generic.go:334] "Generic (PLEG): container finished" podID="76021ce7-895d-4c9c-8862-b34ccaa63a22" containerID="d3e36974ce589abef058cd32c2a1f5851adba5ef41c89cc302c78e91c98b91e1" exitCode=0 Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.624993 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgmqg" event={"ID":"76021ce7-895d-4c9c-8862-b34ccaa63a22","Type":"ContainerDied","Data":"d3e36974ce589abef058cd32c2a1f5851adba5ef41c89cc302c78e91c98b91e1"} Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.625160 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tgmqg" 
event={"ID":"76021ce7-895d-4c9c-8862-b34ccaa63a22","Type":"ContainerDied","Data":"ea625f8f49749ae333eea7161ae3fd1565d3180b329c110d29194b6cb78fb938"} Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.625187 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea625f8f49749ae333eea7161ae3fd1565d3180b329c110d29194b6cb78fb938" Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.629325 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.729056 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-utilities\") pod \"76021ce7-895d-4c9c-8862-b34ccaa63a22\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.729139 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-866th\" (UniqueName: \"kubernetes.io/projected/76021ce7-895d-4c9c-8862-b34ccaa63a22-kube-api-access-866th\") pod \"76021ce7-895d-4c9c-8862-b34ccaa63a22\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.729188 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-catalog-content\") pod \"76021ce7-895d-4c9c-8862-b34ccaa63a22\" (UID: \"76021ce7-895d-4c9c-8862-b34ccaa63a22\") " Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.730351 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-utilities" (OuterVolumeSpecName: "utilities") pod "76021ce7-895d-4c9c-8862-b34ccaa63a22" (UID: "76021ce7-895d-4c9c-8862-b34ccaa63a22"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.737787 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76021ce7-895d-4c9c-8862-b34ccaa63a22-kube-api-access-866th" (OuterVolumeSpecName: "kube-api-access-866th") pod "76021ce7-895d-4c9c-8862-b34ccaa63a22" (UID: "76021ce7-895d-4c9c-8862-b34ccaa63a22"). InnerVolumeSpecName "kube-api-access-866th". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.788415 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "76021ce7-895d-4c9c-8862-b34ccaa63a22" (UID: "76021ce7-895d-4c9c-8862-b34ccaa63a22"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.831324 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.831591 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-866th\" (UniqueName: \"kubernetes.io/projected/76021ce7-895d-4c9c-8862-b34ccaa63a22-kube-api-access-866th\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:49 crc kubenswrapper[4745]: I0121 11:24:49.831669 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/76021ce7-895d-4c9c-8862-b34ccaa63a22-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:24:50 crc kubenswrapper[4745]: I0121 11:24:50.634973 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tgmqg" Jan 21 11:24:50 crc kubenswrapper[4745]: I0121 11:24:50.671760 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tgmqg"] Jan 21 11:24:50 crc kubenswrapper[4745]: I0121 11:24:50.686290 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tgmqg"] Jan 21 11:24:52 crc kubenswrapper[4745]: I0121 11:24:52.011839 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76021ce7-895d-4c9c-8862-b34ccaa63a22" path="/var/lib/kubelet/pods/76021ce7-895d-4c9c-8862-b34ccaa63a22/volumes" Jan 21 11:25:15 crc kubenswrapper[4745]: I0121 11:25:15.866469 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:25:15 crc kubenswrapper[4745]: I0121 11:25:15.867441 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:25:43 crc kubenswrapper[4745]: I0121 11:25:43.183093 4745 generic.go:334] "Generic (PLEG): container finished" podID="2fc7129c-3f8a-42cc-baf6-d499c5582e71" containerID="03bfcf68d53f3e61ea4d6b0fdaa260dee2b84e10a6342e069f816285ec66f3b2" exitCode=0 Jan 21 11:25:43 crc kubenswrapper[4745]: I0121 11:25:43.183167 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" 
event={"ID":"2fc7129c-3f8a-42cc-baf6-d499c5582e71","Type":"ContainerDied","Data":"03bfcf68d53f3e61ea4d6b0fdaa260dee2b84e10a6342e069f816285ec66f3b2"} Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.650307 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.773579 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-ssh-key-openstack-edpm-ipam\") pod \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.773967 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-inventory\") pod \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.774442 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-extra-config-0\") pod \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.775851 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-0\") pod \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.776319 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" 
(UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-0\") pod \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.776485 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-1\") pod \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.776623 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56glw\" (UniqueName: \"kubernetes.io/projected/2fc7129c-3f8a-42cc-baf6-d499c5582e71-kube-api-access-56glw\") pod \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.776731 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-combined-ca-bundle\") pod \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.776867 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-1\") pod \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\" (UID: \"2fc7129c-3f8a-42cc-baf6-d499c5582e71\") " Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.801063 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc7129c-3f8a-42cc-baf6-d499c5582e71-kube-api-access-56glw" (OuterVolumeSpecName: "kube-api-access-56glw") pod "2fc7129c-3f8a-42cc-baf6-d499c5582e71" 
(UID: "2fc7129c-3f8a-42cc-baf6-d499c5582e71"). InnerVolumeSpecName "kube-api-access-56glw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.816867 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "2fc7129c-3f8a-42cc-baf6-d499c5582e71" (UID: "2fc7129c-3f8a-42cc-baf6-d499c5582e71"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.826994 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "2fc7129c-3f8a-42cc-baf6-d499c5582e71" (UID: "2fc7129c-3f8a-42cc-baf6-d499c5582e71"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.837825 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "2fc7129c-3f8a-42cc-baf6-d499c5582e71" (UID: "2fc7129c-3f8a-42cc-baf6-d499c5582e71"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.850690 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2fc7129c-3f8a-42cc-baf6-d499c5582e71" (UID: "2fc7129c-3f8a-42cc-baf6-d499c5582e71"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.850711 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "2fc7129c-3f8a-42cc-baf6-d499c5582e71" (UID: "2fc7129c-3f8a-42cc-baf6-d499c5582e71"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.854620 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "2fc7129c-3f8a-42cc-baf6-d499c5582e71" (UID: "2fc7129c-3f8a-42cc-baf6-d499c5582e71"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.855139 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "2fc7129c-3f8a-42cc-baf6-d499c5582e71" (UID: "2fc7129c-3f8a-42cc-baf6-d499c5582e71"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.858781 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-inventory" (OuterVolumeSpecName: "inventory") pod "2fc7129c-3f8a-42cc-baf6-d499c5582e71" (UID: "2fc7129c-3f8a-42cc-baf6-d499c5582e71"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.883463 4745 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.883677 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.883733 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.886741 4745 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.886783 4745 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.886794 4745 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.886808 4745 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-migration-ssh-key-1\") on 
node \"crc\" DevicePath \"\"" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.886819 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56glw\" (UniqueName: \"kubernetes.io/projected/2fc7129c-3f8a-42cc-baf6-d499c5582e71-kube-api-access-56glw\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:44 crc kubenswrapper[4745]: I0121 11:25:44.886828 4745 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc7129c-3f8a-42cc-baf6-d499c5582e71-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.208958 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.208978 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-gmznd" event={"ID":"2fc7129c-3f8a-42cc-baf6-d499c5582e71","Type":"ContainerDied","Data":"7f540024449e4e5b50a99bfcf33066c46f1dbeeb5b93fd473341a4ea724bc3e4"} Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.209412 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f540024449e4e5b50a99bfcf33066c46f1dbeeb5b93fd473341a4ea724bc3e4" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.355219 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj"] Jan 21 11:25:45 crc kubenswrapper[4745]: E0121 11:25:45.355794 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76021ce7-895d-4c9c-8862-b34ccaa63a22" containerName="extract-content" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.355815 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="76021ce7-895d-4c9c-8862-b34ccaa63a22" containerName="extract-content" Jan 21 11:25:45 crc kubenswrapper[4745]: E0121 11:25:45.355829 4745 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc7129c-3f8a-42cc-baf6-d499c5582e71" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.355836 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc7129c-3f8a-42cc-baf6-d499c5582e71" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 11:25:45 crc kubenswrapper[4745]: E0121 11:25:45.355849 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76021ce7-895d-4c9c-8862-b34ccaa63a22" containerName="registry-server" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.355856 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="76021ce7-895d-4c9c-8862-b34ccaa63a22" containerName="registry-server" Jan 21 11:25:45 crc kubenswrapper[4745]: E0121 11:25:45.355883 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76021ce7-895d-4c9c-8862-b34ccaa63a22" containerName="extract-utilities" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.355890 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="76021ce7-895d-4c9c-8862-b34ccaa63a22" containerName="extract-utilities" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.356095 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="76021ce7-895d-4c9c-8862-b34ccaa63a22" containerName="registry-server" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.356126 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc7129c-3f8a-42cc-baf6-d499c5582e71" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.361080 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.364399 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-kfn2t" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.364608 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.364730 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.370187 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.371307 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.397926 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.398001 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 
11:25:45.398039 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.398065 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.398122 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.398160 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq57c\" (UniqueName: \"kubernetes.io/projected/07e7aba1-1062-43a9-8a86-9b6ceba23c72-kube-api-access-mq57c\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.398228 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.398270 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj"] Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.500556 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.500647 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.500693 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.500731 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.500759 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.500819 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.500856 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq57c\" (UniqueName: \"kubernetes.io/projected/07e7aba1-1062-43a9-8a86-9b6ceba23c72-kube-api-access-mq57c\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.505953 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.506086 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.506355 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.507914 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.509561 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.511331 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.520057 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq57c\" (UniqueName: \"kubernetes.io/projected/07e7aba1-1062-43a9-8a86-9b6ceba23c72-kube-api-access-mq57c\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.692694 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.867059 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.867587 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.867941 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.868853 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa3dc7b3225d6765513e49f385b7965256da8bb3f43b10e15df1d49cfb026b0c"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:25:45 crc kubenswrapper[4745]: I0121 11:25:45.868914 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://aa3dc7b3225d6765513e49f385b7965256da8bb3f43b10e15df1d49cfb026b0c" gracePeriod=600 Jan 21 11:25:46 crc kubenswrapper[4745]: I0121 11:25:46.226882 4745 generic.go:334] "Generic (PLEG): container finished" 
podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="aa3dc7b3225d6765513e49f385b7965256da8bb3f43b10e15df1d49cfb026b0c" exitCode=0 Jan 21 11:25:46 crc kubenswrapper[4745]: I0121 11:25:46.226943 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"aa3dc7b3225d6765513e49f385b7965256da8bb3f43b10e15df1d49cfb026b0c"} Jan 21 11:25:46 crc kubenswrapper[4745]: I0121 11:25:46.226996 4745 scope.go:117] "RemoveContainer" containerID="a82180fe9516c60da5d638cc4a45a91553017f30b5e5a3bffcf26d3208330628" Jan 21 11:25:46 crc kubenswrapper[4745]: I0121 11:25:46.315112 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj"] Jan 21 11:25:46 crc kubenswrapper[4745]: W0121 11:25:46.319220 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07e7aba1_1062_43a9_8a86_9b6ceba23c72.slice/crio-ad88c494ec066f2b88dcfb9d3b9d402038d1c55584fce2e42381f2a6ca249aaf WatchSource:0}: Error finding container ad88c494ec066f2b88dcfb9d3b9d402038d1c55584fce2e42381f2a6ca249aaf: Status 404 returned error can't find the container with id ad88c494ec066f2b88dcfb9d3b9d402038d1c55584fce2e42381f2a6ca249aaf Jan 21 11:25:47 crc kubenswrapper[4745]: I0121 11:25:47.242212 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" event={"ID":"07e7aba1-1062-43a9-8a86-9b6ceba23c72","Type":"ContainerStarted","Data":"03b9397b8014ecab93875a690ecec277e366538165f79e916001eac85d7cc0a6"} Jan 21 11:25:47 crc kubenswrapper[4745]: I0121 11:25:47.244238 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" 
event={"ID":"07e7aba1-1062-43a9-8a86-9b6ceba23c72","Type":"ContainerStarted","Data":"ad88c494ec066f2b88dcfb9d3b9d402038d1c55584fce2e42381f2a6ca249aaf"} Jan 21 11:25:47 crc kubenswrapper[4745]: I0121 11:25:47.252070 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929"} Jan 21 11:25:47 crc kubenswrapper[4745]: I0121 11:25:47.294044 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" podStartSLOduration=1.857178475 podStartE2EDuration="2.294015849s" podCreationTimestamp="2026-01-21 11:25:45 +0000 UTC" firstStartedPulling="2026-01-21 11:25:46.321468288 +0000 UTC m=+2930.782255886" lastFinishedPulling="2026-01-21 11:25:46.758305652 +0000 UTC m=+2931.219093260" observedRunningTime="2026-01-21 11:25:47.274882099 +0000 UTC m=+2931.735669727" watchObservedRunningTime="2026-01-21 11:25:47.294015849 +0000 UTC m=+2931.754803457" Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.639007 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wt9kj"] Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.643653 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.677812 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44qf6\" (UniqueName: \"kubernetes.io/projected/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-kube-api-access-44qf6\") pod \"community-operators-wt9kj\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.677893 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-catalog-content\") pod \"community-operators-wt9kj\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.677975 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-utilities\") pod \"community-operators-wt9kj\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.679893 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wt9kj"] Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.779471 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-utilities\") pod \"community-operators-wt9kj\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.779600 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-44qf6\" (UniqueName: \"kubernetes.io/projected/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-kube-api-access-44qf6\") pod \"community-operators-wt9kj\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.779664 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-catalog-content\") pod \"community-operators-wt9kj\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.780291 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-catalog-content\") pod \"community-operators-wt9kj\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.780561 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-utilities\") pod \"community-operators-wt9kj\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.800997 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44qf6\" (UniqueName: \"kubernetes.io/projected/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-kube-api-access-44qf6\") pod \"community-operators-wt9kj\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:02 crc kubenswrapper[4745]: I0121 11:26:02.967823 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:03 crc kubenswrapper[4745]: I0121 11:26:03.549656 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wt9kj"] Jan 21 11:26:04 crc kubenswrapper[4745]: I0121 11:26:04.438156 4745 generic.go:334] "Generic (PLEG): container finished" podID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" containerID="fe8d3f9da764954893d2825f17e7cf57f4929892d2f9e7c1c5de84784887f8af" exitCode=0 Jan 21 11:26:04 crc kubenswrapper[4745]: I0121 11:26:04.438723 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wt9kj" event={"ID":"2cfaa522-fcbe-4c12-8c04-22688a9cb03c","Type":"ContainerDied","Data":"fe8d3f9da764954893d2825f17e7cf57f4929892d2f9e7c1c5de84784887f8af"} Jan 21 11:26:04 crc kubenswrapper[4745]: I0121 11:26:04.438756 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wt9kj" event={"ID":"2cfaa522-fcbe-4c12-8c04-22688a9cb03c","Type":"ContainerStarted","Data":"0f62a1fe26fa03c046042c5e5e9f9ef3f4bea4ff31c7fdb530012f79c5936aa1"} Jan 21 11:26:05 crc kubenswrapper[4745]: I0121 11:26:05.449762 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wt9kj" event={"ID":"2cfaa522-fcbe-4c12-8c04-22688a9cb03c","Type":"ContainerStarted","Data":"c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775"} Jan 21 11:26:06 crc kubenswrapper[4745]: I0121 11:26:06.460511 4745 generic.go:334] "Generic (PLEG): container finished" podID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" containerID="c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775" exitCode=0 Jan 21 11:26:06 crc kubenswrapper[4745]: I0121 11:26:06.460596 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wt9kj" 
event={"ID":"2cfaa522-fcbe-4c12-8c04-22688a9cb03c","Type":"ContainerDied","Data":"c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775"} Jan 21 11:26:07 crc kubenswrapper[4745]: I0121 11:26:07.479437 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wt9kj" event={"ID":"2cfaa522-fcbe-4c12-8c04-22688a9cb03c","Type":"ContainerStarted","Data":"3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363"} Jan 21 11:26:07 crc kubenswrapper[4745]: I0121 11:26:07.510611 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wt9kj" podStartSLOduration=3.039046581 podStartE2EDuration="5.510587546s" podCreationTimestamp="2026-01-21 11:26:02 +0000 UTC" firstStartedPulling="2026-01-21 11:26:04.441077576 +0000 UTC m=+2948.901865174" lastFinishedPulling="2026-01-21 11:26:06.912618531 +0000 UTC m=+2951.373406139" observedRunningTime="2026-01-21 11:26:07.503211437 +0000 UTC m=+2951.963999035" watchObservedRunningTime="2026-01-21 11:26:07.510587546 +0000 UTC m=+2951.971375154" Jan 21 11:26:12 crc kubenswrapper[4745]: I0121 11:26:12.968476 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:12 crc kubenswrapper[4745]: I0121 11:26:12.969365 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:13 crc kubenswrapper[4745]: I0121 11:26:13.015873 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:13 crc kubenswrapper[4745]: I0121 11:26:13.589389 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:13 crc kubenswrapper[4745]: I0121 11:26:13.645349 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-wt9kj"] Jan 21 11:26:15 crc kubenswrapper[4745]: I0121 11:26:15.558606 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wt9kj" podUID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" containerName="registry-server" containerID="cri-o://3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363" gracePeriod=2 Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.009288 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.185845 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-utilities\") pod \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.185907 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-catalog-content\") pod \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.185964 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44qf6\" (UniqueName: \"kubernetes.io/projected/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-kube-api-access-44qf6\") pod \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\" (UID: \"2cfaa522-fcbe-4c12-8c04-22688a9cb03c\") " Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.187137 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-utilities" (OuterVolumeSpecName: "utilities") pod "2cfaa522-fcbe-4c12-8c04-22688a9cb03c" (UID: 
"2cfaa522-fcbe-4c12-8c04-22688a9cb03c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.199758 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-kube-api-access-44qf6" (OuterVolumeSpecName: "kube-api-access-44qf6") pod "2cfaa522-fcbe-4c12-8c04-22688a9cb03c" (UID: "2cfaa522-fcbe-4c12-8c04-22688a9cb03c"). InnerVolumeSpecName "kube-api-access-44qf6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.271969 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cfaa522-fcbe-4c12-8c04-22688a9cb03c" (UID: "2cfaa522-fcbe-4c12-8c04-22688a9cb03c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.288889 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.288965 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.288997 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44qf6\" (UniqueName: \"kubernetes.io/projected/2cfaa522-fcbe-4c12-8c04-22688a9cb03c-kube-api-access-44qf6\") on node \"crc\" DevicePath \"\"" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.571313 4745 generic.go:334] "Generic (PLEG): container finished" 
podID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" containerID="3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363" exitCode=0 Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.571396 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wt9kj" event={"ID":"2cfaa522-fcbe-4c12-8c04-22688a9cb03c","Type":"ContainerDied","Data":"3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363"} Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.571427 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wt9kj" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.571463 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wt9kj" event={"ID":"2cfaa522-fcbe-4c12-8c04-22688a9cb03c","Type":"ContainerDied","Data":"0f62a1fe26fa03c046042c5e5e9f9ef3f4bea4ff31c7fdb530012f79c5936aa1"} Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.571497 4745 scope.go:117] "RemoveContainer" containerID="3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.593628 4745 scope.go:117] "RemoveContainer" containerID="c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.625642 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wt9kj"] Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.627109 4745 scope.go:117] "RemoveContainer" containerID="fe8d3f9da764954893d2825f17e7cf57f4929892d2f9e7c1c5de84784887f8af" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.636120 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wt9kj"] Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.672199 4745 scope.go:117] "RemoveContainer" 
containerID="3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363" Jan 21 11:26:16 crc kubenswrapper[4745]: E0121 11:26:16.674393 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363\": container with ID starting with 3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363 not found: ID does not exist" containerID="3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.674453 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363"} err="failed to get container status \"3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363\": rpc error: code = NotFound desc = could not find container \"3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363\": container with ID starting with 3419bcf48b3c7256fc934b28e2db483202852a5266ad7148ffd50d7561978363 not found: ID does not exist" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.674492 4745 scope.go:117] "RemoveContainer" containerID="c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775" Jan 21 11:26:16 crc kubenswrapper[4745]: E0121 11:26:16.674994 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775\": container with ID starting with c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775 not found: ID does not exist" containerID="c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.675027 4745 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775"} err="failed to get container status \"c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775\": rpc error: code = NotFound desc = could not find container \"c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775\": container with ID starting with c9913651f8c3706eb4620b42fa5c157ee478567cae87e8c5250ecec3beb91775 not found: ID does not exist" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.675041 4745 scope.go:117] "RemoveContainer" containerID="fe8d3f9da764954893d2825f17e7cf57f4929892d2f9e7c1c5de84784887f8af" Jan 21 11:26:16 crc kubenswrapper[4745]: E0121 11:26:16.675390 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe8d3f9da764954893d2825f17e7cf57f4929892d2f9e7c1c5de84784887f8af\": container with ID starting with fe8d3f9da764954893d2825f17e7cf57f4929892d2f9e7c1c5de84784887f8af not found: ID does not exist" containerID="fe8d3f9da764954893d2825f17e7cf57f4929892d2f9e7c1c5de84784887f8af" Jan 21 11:26:16 crc kubenswrapper[4745]: I0121 11:26:16.675410 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe8d3f9da764954893d2825f17e7cf57f4929892d2f9e7c1c5de84784887f8af"} err="failed to get container status \"fe8d3f9da764954893d2825f17e7cf57f4929892d2f9e7c1c5de84784887f8af\": rpc error: code = NotFound desc = could not find container \"fe8d3f9da764954893d2825f17e7cf57f4929892d2f9e7c1c5de84784887f8af\": container with ID starting with fe8d3f9da764954893d2825f17e7cf57f4929892d2f9e7c1c5de84784887f8af not found: ID does not exist" Jan 21 11:26:18 crc kubenswrapper[4745]: I0121 11:26:18.017673 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" path="/var/lib/kubelet/pods/2cfaa522-fcbe-4c12-8c04-22688a9cb03c/volumes" Jan 21 11:28:15 crc kubenswrapper[4745]: I0121 
11:28:15.867762 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:28:15 crc kubenswrapper[4745]: I0121 11:28:15.868738 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:28:45 crc kubenswrapper[4745]: I0121 11:28:45.866538 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:28:45 crc kubenswrapper[4745]: I0121 11:28:45.867142 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:29:01 crc kubenswrapper[4745]: I0121 11:29:01.207509 4745 generic.go:334] "Generic (PLEG): container finished" podID="07e7aba1-1062-43a9-8a86-9b6ceba23c72" containerID="03b9397b8014ecab93875a690ecec277e366538165f79e916001eac85d7cc0a6" exitCode=0 Jan 21 11:29:01 crc kubenswrapper[4745]: I0121 11:29:01.207609 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" 
event={"ID":"07e7aba1-1062-43a9-8a86-9b6ceba23c72","Type":"ContainerDied","Data":"03b9397b8014ecab93875a690ecec277e366538165f79e916001eac85d7cc0a6"} Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.737125 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.835647 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-2\") pod \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.835785 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mq57c\" (UniqueName: \"kubernetes.io/projected/07e7aba1-1062-43a9-8a86-9b6ceba23c72-kube-api-access-mq57c\") pod \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.835897 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-0\") pod \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.835988 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-inventory\") pod \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.836027 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-telemetry-combined-ca-bundle\") pod \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.836057 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ssh-key-openstack-edpm-ipam\") pod \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.836099 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-1\") pod \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\" (UID: \"07e7aba1-1062-43a9-8a86-9b6ceba23c72\") " Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.841326 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "07e7aba1-1062-43a9-8a86-9b6ceba23c72" (UID: "07e7aba1-1062-43a9-8a86-9b6ceba23c72"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.842059 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e7aba1-1062-43a9-8a86-9b6ceba23c72-kube-api-access-mq57c" (OuterVolumeSpecName: "kube-api-access-mq57c") pod "07e7aba1-1062-43a9-8a86-9b6ceba23c72" (UID: "07e7aba1-1062-43a9-8a86-9b6ceba23c72"). InnerVolumeSpecName "kube-api-access-mq57c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.867720 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "07e7aba1-1062-43a9-8a86-9b6ceba23c72" (UID: "07e7aba1-1062-43a9-8a86-9b6ceba23c72"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.868706 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "07e7aba1-1062-43a9-8a86-9b6ceba23c72" (UID: "07e7aba1-1062-43a9-8a86-9b6ceba23c72"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.870045 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "07e7aba1-1062-43a9-8a86-9b6ceba23c72" (UID: "07e7aba1-1062-43a9-8a86-9b6ceba23c72"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.871327 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "07e7aba1-1062-43a9-8a86-9b6ceba23c72" (UID: "07e7aba1-1062-43a9-8a86-9b6ceba23c72"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.872011 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-inventory" (OuterVolumeSpecName: "inventory") pod "07e7aba1-1062-43a9-8a86-9b6ceba23c72" (UID: "07e7aba1-1062-43a9-8a86-9b6ceba23c72"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.938299 4745 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.938333 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mq57c\" (UniqueName: \"kubernetes.io/projected/07e7aba1-1062-43a9-8a86-9b6ceba23c72-kube-api-access-mq57c\") on node \"crc\" DevicePath \"\"" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.938343 4745 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.938352 4745 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-inventory\") on node \"crc\" DevicePath \"\"" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.938362 4745 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.938371 4745 reconciler_common.go:293] 
"Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 21 11:29:02 crc kubenswrapper[4745]: I0121 11:29:02.938380 4745 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/07e7aba1-1062-43a9-8a86-9b6ceba23c72-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 21 11:29:03 crc kubenswrapper[4745]: I0121 11:29:03.228428 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" event={"ID":"07e7aba1-1062-43a9-8a86-9b6ceba23c72","Type":"ContainerDied","Data":"ad88c494ec066f2b88dcfb9d3b9d402038d1c55584fce2e42381f2a6ca249aaf"} Jan 21 11:29:03 crc kubenswrapper[4745]: I0121 11:29:03.228731 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad88c494ec066f2b88dcfb9d3b9d402038d1c55584fce2e42381f2a6ca249aaf" Jan 21 11:29:03 crc kubenswrapper[4745]: I0121 11:29:03.228843 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj" Jan 21 11:29:15 crc kubenswrapper[4745]: I0121 11:29:15.866797 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:29:15 crc kubenswrapper[4745]: I0121 11:29:15.869374 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:29:15 crc kubenswrapper[4745]: I0121 11:29:15.869624 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 11:29:15 crc kubenswrapper[4745]: I0121 11:29:15.870885 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:29:15 crc kubenswrapper[4745]: I0121 11:29:15.871040 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" gracePeriod=600 Jan 21 11:29:15 crc kubenswrapper[4745]: E0121 11:29:15.993227 4745 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:29:16 crc kubenswrapper[4745]: I0121 11:29:16.352363 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" exitCode=0 Jan 21 11:29:16 crc kubenswrapper[4745]: I0121 11:29:16.352410 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929"} Jan 21 11:29:16 crc kubenswrapper[4745]: I0121 11:29:16.352466 4745 scope.go:117] "RemoveContainer" containerID="aa3dc7b3225d6765513e49f385b7965256da8bb3f43b10e15df1d49cfb026b0c" Jan 21 11:29:16 crc kubenswrapper[4745]: I0121 11:29:16.353648 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:29:16 crc kubenswrapper[4745]: E0121 11:29:16.354231 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:29:29 crc kubenswrapper[4745]: I0121 11:29:29.000320 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 
21 11:29:29 crc kubenswrapper[4745]: E0121 11:29:29.001144 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:29:41 crc kubenswrapper[4745]: I0121 11:29:41.000773 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:29:41 crc kubenswrapper[4745]: E0121 11:29:41.001573 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:29:53 crc kubenswrapper[4745]: I0121 11:29:53.000719 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:29:53 crc kubenswrapper[4745]: E0121 11:29:53.001497 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.165231 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts"] Jan 21 11:30:00 crc kubenswrapper[4745]: E0121 11:30:00.166394 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" containerName="registry-server" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.166411 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" containerName="registry-server" Jan 21 11:30:00 crc kubenswrapper[4745]: E0121 11:30:00.166442 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" containerName="extract-content" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.166451 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" containerName="extract-content" Jan 21 11:30:00 crc kubenswrapper[4745]: E0121 11:30:00.166478 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" containerName="extract-utilities" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.166489 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" containerName="extract-utilities" Jan 21 11:30:00 crc kubenswrapper[4745]: E0121 11:30:00.166501 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e7aba1-1062-43a9-8a86-9b6ceba23c72" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.166510 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e7aba1-1062-43a9-8a86-9b6ceba23c72" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.166887 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfaa522-fcbe-4c12-8c04-22688a9cb03c" containerName="registry-server" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.166920 
4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e7aba1-1062-43a9-8a86-9b6ceba23c72" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.168560 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.176056 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.176410 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.200389 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts"] Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.226127 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v58gl\" (UniqueName: \"kubernetes.io/projected/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-kube-api-access-v58gl\") pod \"collect-profiles-29483250-596ts\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.226501 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-config-volume\") pod \"collect-profiles-29483250-596ts\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.226572 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-secret-volume\") pod \"collect-profiles-29483250-596ts\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.330901 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-config-volume\") pod \"collect-profiles-29483250-596ts\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.330953 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-secret-volume\") pod \"collect-profiles-29483250-596ts\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.331073 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v58gl\" (UniqueName: \"kubernetes.io/projected/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-kube-api-access-v58gl\") pod \"collect-profiles-29483250-596ts\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.332007 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-config-volume\") pod \"collect-profiles-29483250-596ts\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.347097 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-secret-volume\") pod \"collect-profiles-29483250-596ts\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.350153 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v58gl\" (UniqueName: \"kubernetes.io/projected/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-kube-api-access-v58gl\") pod \"collect-profiles-29483250-596ts\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.505756 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:00 crc kubenswrapper[4745]: I0121 11:30:00.954573 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts"] Jan 21 11:30:01 crc kubenswrapper[4745]: I0121 11:30:01.761809 4745 generic.go:334] "Generic (PLEG): container finished" podID="67cd1a0a-60ad-4d9d-a498-b13cd535b86d" containerID="49303b0c9710aca04e6c06e2a805f0423dd073ac3acb1407ede5cab0a5795e2d" exitCode=0 Jan 21 11:30:01 crc kubenswrapper[4745]: I0121 11:30:01.761889 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" event={"ID":"67cd1a0a-60ad-4d9d-a498-b13cd535b86d","Type":"ContainerDied","Data":"49303b0c9710aca04e6c06e2a805f0423dd073ac3acb1407ede5cab0a5795e2d"} Jan 21 11:30:01 crc kubenswrapper[4745]: I0121 11:30:01.762174 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" event={"ID":"67cd1a0a-60ad-4d9d-a498-b13cd535b86d","Type":"ContainerStarted","Data":"00cd6e949a0c5d89a160f3ade7cac95a096c713098522576ff64cecc3908559c"} Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.115772 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.190197 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-config-volume\") pod \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.190491 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v58gl\" (UniqueName: \"kubernetes.io/projected/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-kube-api-access-v58gl\") pod \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.190602 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-secret-volume\") pod \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\" (UID: \"67cd1a0a-60ad-4d9d-a498-b13cd535b86d\") " Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.190803 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-config-volume" (OuterVolumeSpecName: "config-volume") pod "67cd1a0a-60ad-4d9d-a498-b13cd535b86d" (UID: "67cd1a0a-60ad-4d9d-a498-b13cd535b86d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.191289 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.198798 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-kube-api-access-v58gl" (OuterVolumeSpecName: "kube-api-access-v58gl") pod "67cd1a0a-60ad-4d9d-a498-b13cd535b86d" (UID: "67cd1a0a-60ad-4d9d-a498-b13cd535b86d"). InnerVolumeSpecName "kube-api-access-v58gl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.209778 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "67cd1a0a-60ad-4d9d-a498-b13cd535b86d" (UID: "67cd1a0a-60ad-4d9d-a498-b13cd535b86d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.221956 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 21 11:30:03 crc kubenswrapper[4745]: E0121 11:30:03.222427 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67cd1a0a-60ad-4d9d-a498-b13cd535b86d" containerName="collect-profiles" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.222447 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="67cd1a0a-60ad-4d9d-a498-b13cd535b86d" containerName="collect-profiles" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.222746 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="67cd1a0a-60ad-4d9d-a498-b13cd535b86d" containerName="collect-profiles" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.225666 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.228129 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.228744 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.228921 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.229053 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rqj79" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.234552 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.293048 4745 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.293090 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.293145 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.293206 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.293252 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.293278 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.293312 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.293327 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.293349 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n46jc\" (UniqueName: \"kubernetes.io/projected/7dc068ac-4289-4996-8263-d1db282282cd-kube-api-access-n46jc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.293396 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v58gl\" (UniqueName: \"kubernetes.io/projected/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-kube-api-access-v58gl\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.293410 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/67cd1a0a-60ad-4d9d-a498-b13cd535b86d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.394846 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.395159 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.395323 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc 
kubenswrapper[4745]: I0121 11:30:03.395448 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.395601 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.395679 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.395772 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.395781 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n46jc\" (UniqueName: \"kubernetes.io/projected/7dc068ac-4289-4996-8263-d1db282282cd-kube-api-access-n46jc\") pod 
\"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.395851 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.395878 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.395945 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.397292 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.397760 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.398344 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.400076 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.400245 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.408430 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.414478 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n46jc\" (UniqueName: \"kubernetes.io/projected/7dc068ac-4289-4996-8263-d1db282282cd-kube-api-access-n46jc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.426130 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.624078 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.781697 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" event={"ID":"67cd1a0a-60ad-4d9d-a498-b13cd535b86d","Type":"ContainerDied","Data":"00cd6e949a0c5d89a160f3ade7cac95a096c713098522576ff64cecc3908559c"} Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.781989 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00cd6e949a0c5d89a160f3ade7cac95a096c713098522576ff64cecc3908559c" Jan 21 11:30:03 crc kubenswrapper[4745]: I0121 11:30:03.782051 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts" Jan 21 11:30:04 crc kubenswrapper[4745]: W0121 11:30:04.188721 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dc068ac_4289_4996_8263_d1db282282cd.slice/crio-1b08c01e75f8d555bd7da88d2b5ea781b439b5b6bf62460f299418cb1e4c0840 WatchSource:0}: Error finding container 1b08c01e75f8d555bd7da88d2b5ea781b439b5b6bf62460f299418cb1e4c0840: Status 404 returned error can't find the container with id 1b08c01e75f8d555bd7da88d2b5ea781b439b5b6bf62460f299418cb1e4c0840 Jan 21 11:30:04 crc kubenswrapper[4745]: I0121 11:30:04.192344 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 21 11:30:04 crc kubenswrapper[4745]: I0121 11:30:04.194289 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:30:04 crc kubenswrapper[4745]: I0121 11:30:04.228271 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"] Jan 21 11:30:04 crc kubenswrapper[4745]: I0121 11:30:04.246069 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483205-2pqw5"] Jan 21 11:30:04 crc kubenswrapper[4745]: I0121 11:30:04.795712 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"7dc068ac-4289-4996-8263-d1db282282cd","Type":"ContainerStarted","Data":"1b08c01e75f8d555bd7da88d2b5ea781b439b5b6bf62460f299418cb1e4c0840"} Jan 21 11:30:06 crc kubenswrapper[4745]: I0121 11:30:06.010424 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:30:06 crc kubenswrapper[4745]: E0121 11:30:06.011717 4745 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:30:06 crc kubenswrapper[4745]: I0121 11:30:06.018364 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db889bc-c207-4047-8b3a-47037f71ac5c" path="/var/lib/kubelet/pods/7db889bc-c207-4047-8b3a-47037f71ac5c/volumes" Jan 21 11:30:17 crc kubenswrapper[4745]: I0121 11:30:17.001046 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:30:17 crc kubenswrapper[4745]: E0121 11:30:17.002014 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:30:28 crc kubenswrapper[4745]: I0121 11:30:28.000204 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:30:28 crc kubenswrapper[4745]: E0121 11:30:28.001372 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:30:42 crc 
kubenswrapper[4745]: I0121 11:30:42.001804 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:30:42 crc kubenswrapper[4745]: E0121 11:30:42.003062 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:30:55 crc kubenswrapper[4745]: I0121 11:30:54.999501 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:30:55 crc kubenswrapper[4745]: E0121 11:30:55.000114 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:30:55 crc kubenswrapper[4745]: I0121 11:30:55.507029 4745 scope.go:117] "RemoveContainer" containerID="966810c431842121881a480692de06203687ac5b3410032f16f296f7b5f66ef2" Jan 21 11:30:56 crc kubenswrapper[4745]: I0121 11:30:56.022857 4745 scope.go:117] "RemoveContainer" containerID="d3e36974ce589abef058cd32c2a1f5851adba5ef41c89cc302c78e91c98b91e1" Jan 21 11:30:56 crc kubenswrapper[4745]: I0121 11:30:56.069950 4745 scope.go:117] "RemoveContainer" containerID="b5621de655cef61bbd5469c6c853aa0753860230467cbbf2c88f2a2aa5903bf1" Jan 21 11:30:56 crc kubenswrapper[4745]: E0121 11:30:56.100156 4745 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb" Jan 21 11:30:56 crc kubenswrapper[4745]: E0121 11:30:56.100202 4745 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb" Jan 21 11:30:56 crc kubenswrapper[4745]: E0121 11:30:56.100357 4745 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath
:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n46jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(7dc068ac-4289-4996-8263-d1db282282cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 21 11:30:56 crc kubenswrapper[4745]: E0121 11:30:56.102372 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="7dc068ac-4289-4996-8263-d1db282282cd" Jan 21 11:30:56 crc kubenswrapper[4745]: I0121 11:30:56.107980 4745 scope.go:117] "RemoveContainer" containerID="5fa40b7e08f9550a298f989f9b3bfc2a14e32206a0267413c9eebf20daaf8f37" Jan 21 11:30:56 crc kubenswrapper[4745]: E0121 11:30:56.350486 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:c3923531bcda0b0811b2d5053f189beb\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="7dc068ac-4289-4996-8263-d1db282282cd" Jan 21 11:31:10 crc kubenswrapper[4745]: I0121 11:31:10.000589 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:31:10 crc kubenswrapper[4745]: E0121 11:31:10.001498 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:31:12 crc kubenswrapper[4745]: I0121 11:31:12.200703 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 21 11:31:13 crc kubenswrapper[4745]: I0121 11:31:13.515722 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" 
event={"ID":"7dc068ac-4289-4996-8263-d1db282282cd","Type":"ContainerStarted","Data":"e59c725bc540d2a468b644ded7efc006be676656febd5573d80a727dcd06cb5f"} Jan 21 11:31:13 crc kubenswrapper[4745]: I0121 11:31:13.545613 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=3.541485539 podStartE2EDuration="1m11.545594317s" podCreationTimestamp="2026-01-21 11:30:02 +0000 UTC" firstStartedPulling="2026-01-21 11:30:04.194036226 +0000 UTC m=+3188.654823834" lastFinishedPulling="2026-01-21 11:31:12.198145014 +0000 UTC m=+3256.658932612" observedRunningTime="2026-01-21 11:31:13.535496564 +0000 UTC m=+3257.996284162" watchObservedRunningTime="2026-01-21 11:31:13.545594317 +0000 UTC m=+3258.006381915" Jan 21 11:31:23 crc kubenswrapper[4745]: I0121 11:31:23.000925 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:31:23 crc kubenswrapper[4745]: E0121 11:31:23.001698 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:31:37 crc kubenswrapper[4745]: I0121 11:31:37.000056 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:31:37 crc kubenswrapper[4745]: E0121 11:31:37.000938 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:31:51 crc kubenswrapper[4745]: I0121 11:31:51.000370 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:31:51 crc kubenswrapper[4745]: E0121 11:31:51.001150 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:32:03 crc kubenswrapper[4745]: I0121 11:32:03.000517 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:32:03 crc kubenswrapper[4745]: E0121 11:32:03.001410 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:32:16 crc kubenswrapper[4745]: I0121 11:32:16.006491 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:32:16 crc kubenswrapper[4745]: E0121 11:32:16.007654 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:32:24 crc kubenswrapper[4745]: I0121 11:32:24.152126 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.011690037s: [/var/lib/containers/storage/overlay/91ef8ada651de23d31088043127fd3034bcf6ec6bfc882120bce8f5f1bb700b0/diff /var/log/pods/openshift-apiserver_apiserver-76f77b778f-n7p28_c531fa6e-de28-476b-8b34-aca8b0e2cc56/openshift-apiserver/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:32:29 crc kubenswrapper[4745]: I0121 11:32:29.000499 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:32:29 crc kubenswrapper[4745]: E0121 11:32:29.001151 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:32:43 crc kubenswrapper[4745]: I0121 11:32:43.001016 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:32:43 crc kubenswrapper[4745]: E0121 11:32:43.002124 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:32:55 crc kubenswrapper[4745]: I0121 11:32:55.000976 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:32:55 crc kubenswrapper[4745]: E0121 11:32:55.001911 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:33:08 crc kubenswrapper[4745]: I0121 11:33:08.001000 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:33:08 crc kubenswrapper[4745]: E0121 11:33:08.001855 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:33:21 crc kubenswrapper[4745]: I0121 11:33:21.000870 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:33:21 crc kubenswrapper[4745]: E0121 11:33:21.001828 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:33:25 crc kubenswrapper[4745]: I0121 11:33:25.944826 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kv5z6"] Jan 21 11:33:25 crc kubenswrapper[4745]: I0121 11:33:25.949429 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:26 crc kubenswrapper[4745]: I0121 11:33:26.062983 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7gsz\" (UniqueName: \"kubernetes.io/projected/ff122cbb-4798-4fca-a61d-6f4ca070d626-kube-api-access-g7gsz\") pod \"redhat-operators-kv5z6\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:26 crc kubenswrapper[4745]: I0121 11:33:26.063142 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-utilities\") pod \"redhat-operators-kv5z6\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:26 crc kubenswrapper[4745]: I0121 11:33:26.063225 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-catalog-content\") pod \"redhat-operators-kv5z6\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:26 crc kubenswrapper[4745]: I0121 11:33:26.143347 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kv5z6"] Jan 21 11:33:26 crc 
kubenswrapper[4745]: I0121 11:33:26.180814 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-utilities\") pod \"redhat-operators-kv5z6\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:26 crc kubenswrapper[4745]: I0121 11:33:26.180900 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-catalog-content\") pod \"redhat-operators-kv5z6\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:26 crc kubenswrapper[4745]: I0121 11:33:26.180976 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7gsz\" (UniqueName: \"kubernetes.io/projected/ff122cbb-4798-4fca-a61d-6f4ca070d626-kube-api-access-g7gsz\") pod \"redhat-operators-kv5z6\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:26 crc kubenswrapper[4745]: I0121 11:33:26.184575 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-utilities\") pod \"redhat-operators-kv5z6\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:26 crc kubenswrapper[4745]: I0121 11:33:26.185303 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-catalog-content\") pod \"redhat-operators-kv5z6\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:26 crc kubenswrapper[4745]: I0121 11:33:26.247684 4745 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7gsz\" (UniqueName: \"kubernetes.io/projected/ff122cbb-4798-4fca-a61d-6f4ca070d626-kube-api-access-g7gsz\") pod \"redhat-operators-kv5z6\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:26 crc kubenswrapper[4745]: I0121 11:33:26.272475 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:27 crc kubenswrapper[4745]: I0121 11:33:27.591737 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kv5z6"] Jan 21 11:33:27 crc kubenswrapper[4745]: I0121 11:33:27.728195 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv5z6" event={"ID":"ff122cbb-4798-4fca-a61d-6f4ca070d626","Type":"ContainerStarted","Data":"4f93ad59135e12e2cbc0f6e87d20bb84ee70ca8b3df528e51e8320149cc88795"} Jan 21 11:33:28 crc kubenswrapper[4745]: I0121 11:33:28.738451 4745 generic.go:334] "Generic (PLEG): container finished" podID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerID="14776c532cd6e1134cd3183144d0a40cb4b5c3ca64da7d499ea7777e2fad9756" exitCode=0 Jan 21 11:33:28 crc kubenswrapper[4745]: I0121 11:33:28.738572 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv5z6" event={"ID":"ff122cbb-4798-4fca-a61d-6f4ca070d626","Type":"ContainerDied","Data":"14776c532cd6e1134cd3183144d0a40cb4b5c3ca64da7d499ea7777e2fad9756"} Jan 21 11:33:30 crc kubenswrapper[4745]: I0121 11:33:30.757561 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv5z6" event={"ID":"ff122cbb-4798-4fca-a61d-6f4ca070d626","Type":"ContainerStarted","Data":"1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384"} Jan 21 11:33:33 crc kubenswrapper[4745]: E0121 11:33:33.221039 4745 kubelet.go:2526] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.221s" Jan 21 11:33:35 crc kubenswrapper[4745]: I0121 11:33:35.000263 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:33:35 crc kubenswrapper[4745]: E0121 11:33:35.000814 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:33:36 crc kubenswrapper[4745]: I0121 11:33:36.810752 4745 generic.go:334] "Generic (PLEG): container finished" podID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerID="1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384" exitCode=0 Jan 21 11:33:36 crc kubenswrapper[4745]: I0121 11:33:36.810845 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv5z6" event={"ID":"ff122cbb-4798-4fca-a61d-6f4ca070d626","Type":"ContainerDied","Data":"1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384"} Jan 21 11:33:37 crc kubenswrapper[4745]: I0121 11:33:37.821790 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv5z6" event={"ID":"ff122cbb-4798-4fca-a61d-6f4ca070d626","Type":"ContainerStarted","Data":"e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f"} Jan 21 11:33:37 crc kubenswrapper[4745]: I0121 11:33:37.847793 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kv5z6" podStartSLOduration=4.130363571 podStartE2EDuration="12.847074243s" podCreationTimestamp="2026-01-21 11:33:25 +0000 UTC" 
firstStartedPulling="2026-01-21 11:33:28.740586052 +0000 UTC m=+3393.201373650" lastFinishedPulling="2026-01-21 11:33:37.457296724 +0000 UTC m=+3401.918084322" observedRunningTime="2026-01-21 11:33:37.846451866 +0000 UTC m=+3402.307239464" watchObservedRunningTime="2026-01-21 11:33:37.847074243 +0000 UTC m=+3402.307861851" Jan 21 11:33:39 crc kubenswrapper[4745]: I0121 11:33:39.282187 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.178598259s: [/var/lib/containers/storage/overlay/c9f574a1edbe6b0b99653005b95000dbea5d09c6241fc795cc04b483ce6623b1/diff /var/log/pods/openstack_openstack-galera-0_c2b5df3e-a44d-42ff-96a4-2bfd32db45bf/galera/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:33:46 crc kubenswrapper[4745]: I0121 11:33:46.273688 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:46 crc kubenswrapper[4745]: I0121 11:33:46.274275 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:33:48 crc kubenswrapper[4745]: I0121 11:33:48.201398 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kv5z6" podUID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerName="registry-server" probeResult="failure" output=< Jan 21 11:33:48 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:33:48 crc kubenswrapper[4745]: > Jan 21 11:33:50 crc kubenswrapper[4745]: I0121 11:33:50.000289 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:33:50 crc kubenswrapper[4745]: E0121 11:33:50.000796 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:33:50 crc kubenswrapper[4745]: I0121 11:33:50.688891 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-tf44k" podUID="59cfcfcd-7ed9-4f60-85ad-fcb228dc1895" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.301220 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5lh2g"] Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.305010 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.318711 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lh2g"] Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.386347 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-utilities\") pod \"redhat-marketplace-5lh2g\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.386601 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-catalog-content\") pod \"redhat-marketplace-5lh2g\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.386707 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlldv\" (UniqueName: \"kubernetes.io/projected/2a675bdd-f848-4ed5-98f0-e1065ffb031c-kube-api-access-rlldv\") pod \"redhat-marketplace-5lh2g\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.488781 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-utilities\") pod \"redhat-marketplace-5lh2g\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.488927 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-catalog-content\") pod \"redhat-marketplace-5lh2g\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.488988 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlldv\" (UniqueName: \"kubernetes.io/projected/2a675bdd-f848-4ed5-98f0-e1065ffb031c-kube-api-access-rlldv\") pod \"redhat-marketplace-5lh2g\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.489516 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-utilities\") pod \"redhat-marketplace-5lh2g\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.489817 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-catalog-content\") pod \"redhat-marketplace-5lh2g\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.615039 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlldv\" (UniqueName: \"kubernetes.io/projected/2a675bdd-f848-4ed5-98f0-e1065ffb031c-kube-api-access-rlldv\") pod \"redhat-marketplace-5lh2g\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:33:55 crc kubenswrapper[4745]: I0121 11:33:55.655810 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:33:56 crc kubenswrapper[4745]: I0121 11:33:56.260175 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lh2g"] Jan 21 11:33:56 crc kubenswrapper[4745]: I0121 11:33:56.454170 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lh2g" event={"ID":"2a675bdd-f848-4ed5-98f0-e1065ffb031c","Type":"ContainerStarted","Data":"685aefeabf8043cb4871448d2b2643b6c16b997996dcb00d90e6db1b9bc383b7"} Jan 21 11:33:57 crc kubenswrapper[4745]: I0121 11:33:57.352831 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-kv5z6" podUID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerName="registry-server" probeResult="failure" output=< Jan 21 11:33:57 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:33:57 crc kubenswrapper[4745]: > Jan 21 11:33:57 crc kubenswrapper[4745]: I0121 11:33:57.474180 4745 generic.go:334] "Generic (PLEG): container finished" podID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" 
containerID="f6409ec672d822a2cdff259ad730acf7a41682dd2e159001e4b1f21c915a8bec" exitCode=0 Jan 21 11:33:57 crc kubenswrapper[4745]: I0121 11:33:57.474237 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lh2g" event={"ID":"2a675bdd-f848-4ed5-98f0-e1065ffb031c","Type":"ContainerDied","Data":"f6409ec672d822a2cdff259ad730acf7a41682dd2e159001e4b1f21c915a8bec"} Jan 21 11:33:59 crc kubenswrapper[4745]: I0121 11:33:59.511381 4745 generic.go:334] "Generic (PLEG): container finished" podID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" containerID="72702c499c017f79fc9fa7d48a39ae0d94fa2aeb2029966cae55f84fc26bf58a" exitCode=0 Jan 21 11:33:59 crc kubenswrapper[4745]: I0121 11:33:59.511424 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lh2g" event={"ID":"2a675bdd-f848-4ed5-98f0-e1065ffb031c","Type":"ContainerDied","Data":"72702c499c017f79fc9fa7d48a39ae0d94fa2aeb2029966cae55f84fc26bf58a"} Jan 21 11:34:00 crc kubenswrapper[4745]: I0121 11:34:00.521666 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lh2g" event={"ID":"2a675bdd-f848-4ed5-98f0-e1065ffb031c","Type":"ContainerStarted","Data":"3cbf62e8264530fa4b3c7cc71eadd333e62cbd0fbeccf485bced23756537c7fe"} Jan 21 11:34:01 crc kubenswrapper[4745]: I0121 11:34:01.565089 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5lh2g" podStartSLOduration=4.031122403 podStartE2EDuration="6.565059098s" podCreationTimestamp="2026-01-21 11:33:55 +0000 UTC" firstStartedPulling="2026-01-21 11:33:57.477511744 +0000 UTC m=+3421.938299342" lastFinishedPulling="2026-01-21 11:34:00.011448439 +0000 UTC m=+3424.472236037" observedRunningTime="2026-01-21 11:34:01.55075513 +0000 UTC m=+3426.011542728" watchObservedRunningTime="2026-01-21 11:34:01.565059098 +0000 UTC m=+3426.025846706" Jan 21 11:34:02 crc kubenswrapper[4745]: E0121 
11:34:02.826489 4745 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.78:56256->38.129.56.78:36213: write tcp 38.129.56.78:56256->38.129.56.78:36213: write: broken pipe Jan 21 11:34:02 crc kubenswrapper[4745]: E0121 11:34:02.826488 4745 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.129.56.78:56256->38.129.56.78:36213: read tcp 38.129.56.78:56256->38.129.56.78:36213: read: connection reset by peer Jan 21 11:34:04 crc kubenswrapper[4745]: I0121 11:34:03.999993 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:34:04 crc kubenswrapper[4745]: E0121 11:34:04.000488 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:34:05 crc kubenswrapper[4745]: I0121 11:34:05.656148 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:34:05 crc kubenswrapper[4745]: I0121 11:34:05.657677 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:34:05 crc kubenswrapper[4745]: I0121 11:34:05.712638 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:34:06 crc kubenswrapper[4745]: I0121 11:34:06.341770 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:34:06 crc kubenswrapper[4745]: I0121 11:34:06.391251 4745 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:34:06 crc kubenswrapper[4745]: I0121 11:34:06.627036 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:34:07 crc kubenswrapper[4745]: I0121 11:34:07.158470 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kv5z6"] Jan 21 11:34:07 crc kubenswrapper[4745]: I0121 11:34:07.582960 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kv5z6" podUID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerName="registry-server" containerID="cri-o://e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f" gracePeriod=2 Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.200694 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.265071 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-catalog-content\") pod \"ff122cbb-4798-4fca-a61d-6f4ca070d626\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.265397 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-utilities\") pod \"ff122cbb-4798-4fca-a61d-6f4ca070d626\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.265459 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7gsz\" (UniqueName: \"kubernetes.io/projected/ff122cbb-4798-4fca-a61d-6f4ca070d626-kube-api-access-g7gsz\") pod 
\"ff122cbb-4798-4fca-a61d-6f4ca070d626\" (UID: \"ff122cbb-4798-4fca-a61d-6f4ca070d626\") " Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.269969 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-utilities" (OuterVolumeSpecName: "utilities") pod "ff122cbb-4798-4fca-a61d-6f4ca070d626" (UID: "ff122cbb-4798-4fca-a61d-6f4ca070d626"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.298635 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff122cbb-4798-4fca-a61d-6f4ca070d626-kube-api-access-g7gsz" (OuterVolumeSpecName: "kube-api-access-g7gsz") pod "ff122cbb-4798-4fca-a61d-6f4ca070d626" (UID: "ff122cbb-4798-4fca-a61d-6f4ca070d626"). InnerVolumeSpecName "kube-api-access-g7gsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.368034 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.368257 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7gsz\" (UniqueName: \"kubernetes.io/projected/ff122cbb-4798-4fca-a61d-6f4ca070d626-kube-api-access-g7gsz\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.440204 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff122cbb-4798-4fca-a61d-6f4ca070d626" (UID: "ff122cbb-4798-4fca-a61d-6f4ca070d626"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.470889 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff122cbb-4798-4fca-a61d-6f4ca070d626-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.592076 4745 generic.go:334] "Generic (PLEG): container finished" podID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerID="e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f" exitCode=0 Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.592136 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kv5z6" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.592163 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv5z6" event={"ID":"ff122cbb-4798-4fca-a61d-6f4ca070d626","Type":"ContainerDied","Data":"e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f"} Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.592505 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kv5z6" event={"ID":"ff122cbb-4798-4fca-a61d-6f4ca070d626","Type":"ContainerDied","Data":"4f93ad59135e12e2cbc0f6e87d20bb84ee70ca8b3df528e51e8320149cc88795"} Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.592539 4745 scope.go:117] "RemoveContainer" containerID="e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.620938 4745 scope.go:117] "RemoveContainer" containerID="1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.642014 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kv5z6"] Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 
11:34:08.645993 4745 scope.go:117] "RemoveContainer" containerID="14776c532cd6e1134cd3183144d0a40cb4b5c3ca64da7d499ea7777e2fad9756" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.704365 4745 scope.go:117] "RemoveContainer" containerID="e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f" Jan 21 11:34:08 crc kubenswrapper[4745]: E0121 11:34:08.711092 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f\": container with ID starting with e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f not found: ID does not exist" containerID="e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.718565 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f"} err="failed to get container status \"e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f\": rpc error: code = NotFound desc = could not find container \"e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f\": container with ID starting with e8619836f6c0be4f42974076d6862fe176f284d4a550bbbecb52cebb0189039f not found: ID does not exist" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.718648 4745 scope.go:117] "RemoveContainer" containerID="1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384" Jan 21 11:34:08 crc kubenswrapper[4745]: E0121 11:34:08.720334 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384\": container with ID starting with 1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384 not found: ID does not exist" 
containerID="1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.720441 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384"} err="failed to get container status \"1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384\": rpc error: code = NotFound desc = could not find container \"1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384\": container with ID starting with 1067f10b1114fdfadf96cf20640d7996c5280588ab790358f5b60503e8aa7384 not found: ID does not exist" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.720517 4745 scope.go:117] "RemoveContainer" containerID="14776c532cd6e1134cd3183144d0a40cb4b5c3ca64da7d499ea7777e2fad9756" Jan 21 11:34:08 crc kubenswrapper[4745]: E0121 11:34:08.722193 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14776c532cd6e1134cd3183144d0a40cb4b5c3ca64da7d499ea7777e2fad9756\": container with ID starting with 14776c532cd6e1134cd3183144d0a40cb4b5c3ca64da7d499ea7777e2fad9756 not found: ID does not exist" containerID="14776c532cd6e1134cd3183144d0a40cb4b5c3ca64da7d499ea7777e2fad9756" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.722251 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14776c532cd6e1134cd3183144d0a40cb4b5c3ca64da7d499ea7777e2fad9756"} err="failed to get container status \"14776c532cd6e1134cd3183144d0a40cb4b5c3ca64da7d499ea7777e2fad9756\": rpc error: code = NotFound desc = could not find container \"14776c532cd6e1134cd3183144d0a40cb4b5c3ca64da7d499ea7777e2fad9756\": container with ID starting with 14776c532cd6e1134cd3183144d0a40cb4b5c3ca64da7d499ea7777e2fad9756 not found: ID does not exist" Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.744706 4745 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kv5z6"] Jan 21 11:34:08 crc kubenswrapper[4745]: I0121 11:34:08.954106 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lh2g"] Jan 21 11:34:09 crc kubenswrapper[4745]: I0121 11:34:09.603731 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5lh2g" podUID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" containerName="registry-server" containerID="cri-o://3cbf62e8264530fa4b3c7cc71eadd333e62cbd0fbeccf485bced23756537c7fe" gracePeriod=2 Jan 21 11:34:10 crc kubenswrapper[4745]: I0121 11:34:10.016230 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff122cbb-4798-4fca-a61d-6f4ca070d626" path="/var/lib/kubelet/pods/ff122cbb-4798-4fca-a61d-6f4ca070d626/volumes" Jan 21 11:34:10 crc kubenswrapper[4745]: I0121 11:34:10.617984 4745 generic.go:334] "Generic (PLEG): container finished" podID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" containerID="3cbf62e8264530fa4b3c7cc71eadd333e62cbd0fbeccf485bced23756537c7fe" exitCode=0 Jan 21 11:34:10 crc kubenswrapper[4745]: I0121 11:34:10.618071 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lh2g" event={"ID":"2a675bdd-f848-4ed5-98f0-e1065ffb031c","Type":"ContainerDied","Data":"3cbf62e8264530fa4b3c7cc71eadd333e62cbd0fbeccf485bced23756537c7fe"} Jan 21 11:34:10 crc kubenswrapper[4745]: I0121 11:34:10.986992 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.070398 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-utilities\") pod \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.070904 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlldv\" (UniqueName: \"kubernetes.io/projected/2a675bdd-f848-4ed5-98f0-e1065ffb031c-kube-api-access-rlldv\") pod \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.070949 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-catalog-content\") pod \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\" (UID: \"2a675bdd-f848-4ed5-98f0-e1065ffb031c\") " Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.072576 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-utilities" (OuterVolumeSpecName: "utilities") pod "2a675bdd-f848-4ed5-98f0-e1065ffb031c" (UID: "2a675bdd-f848-4ed5-98f0-e1065ffb031c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.086863 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a675bdd-f848-4ed5-98f0-e1065ffb031c-kube-api-access-rlldv" (OuterVolumeSpecName: "kube-api-access-rlldv") pod "2a675bdd-f848-4ed5-98f0-e1065ffb031c" (UID: "2a675bdd-f848-4ed5-98f0-e1065ffb031c"). InnerVolumeSpecName "kube-api-access-rlldv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.096156 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a675bdd-f848-4ed5-98f0-e1065ffb031c" (UID: "2a675bdd-f848-4ed5-98f0-e1065ffb031c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.172998 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlldv\" (UniqueName: \"kubernetes.io/projected/2a675bdd-f848-4ed5-98f0-e1065ffb031c-kube-api-access-rlldv\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.173033 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.173043 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a675bdd-f848-4ed5-98f0-e1065ffb031c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.634329 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lh2g" event={"ID":"2a675bdd-f848-4ed5-98f0-e1065ffb031c","Type":"ContainerDied","Data":"685aefeabf8043cb4871448d2b2643b6c16b997996dcb00d90e6db1b9bc383b7"} Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.634386 4745 scope.go:117] "RemoveContainer" containerID="3cbf62e8264530fa4b3c7cc71eadd333e62cbd0fbeccf485bced23756537c7fe" Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.634522 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lh2g" Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.666215 4745 scope.go:117] "RemoveContainer" containerID="72702c499c017f79fc9fa7d48a39ae0d94fa2aeb2029966cae55f84fc26bf58a" Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.678115 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lh2g"] Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.687837 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lh2g"] Jan 21 11:34:11 crc kubenswrapper[4745]: I0121 11:34:11.695521 4745 scope.go:117] "RemoveContainer" containerID="f6409ec672d822a2cdff259ad730acf7a41682dd2e159001e4b1f21c915a8bec" Jan 21 11:34:12 crc kubenswrapper[4745]: I0121 11:34:12.015254 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" path="/var/lib/kubelet/pods/2a675bdd-f848-4ed5-98f0-e1065ffb031c/volumes" Jan 21 11:34:18 crc kubenswrapper[4745]: I0121 11:34:18.000964 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:34:18 crc kubenswrapper[4745]: I0121 11:34:18.717463 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"3d6b7250c5e47a5dda26aa19ad12f3dae68ad93f127c10ee247c84ab3a515457"} Jan 21 11:34:28 crc kubenswrapper[4745]: E0121 11:34:28.072402 4745 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.073s" Jan 21 11:35:22 crc kubenswrapper[4745]: I0121 11:35:22.610684 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.518014513s: [/var/lib/containers/storage/overlay/2be640cc3ebb093c5383f66ee9f64e7b9237012d792f11bf55688f34491df20b/diff 
/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rgtt2_6f55bdba-45e5-485d-ae8f-a8576885b3ff/cert-manager-cainjector/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:35:22 crc kubenswrapper[4745]: I0121 11:35:22.612309 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.495024771s: [/var/lib/containers/storage/overlay/08fa76a73f825168377f44ede1470d9a8e33719293693303eb6e0f2c66d0da76/diff /var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_71bb4a3aecc4ba5b26c4b7318770ce13/kube-apiserver-check-endpoints/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.307454 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rjsnl"] Jan 21 11:35:24 crc kubenswrapper[4745]: E0121 11:35:24.312603 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerName="extract-content" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.312635 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerName="extract-content" Jan 21 11:35:24 crc kubenswrapper[4745]: E0121 11:35:24.312667 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerName="registry-server" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.312674 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerName="registry-server" Jan 21 11:35:24 crc kubenswrapper[4745]: E0121 11:35:24.312684 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" containerName="extract-utilities" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.312692 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" 
containerName="extract-utilities" Jan 21 11:35:24 crc kubenswrapper[4745]: E0121 11:35:24.312702 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerName="extract-utilities" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.312708 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerName="extract-utilities" Jan 21 11:35:24 crc kubenswrapper[4745]: E0121 11:35:24.312718 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" containerName="extract-content" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.312724 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" containerName="extract-content" Jan 21 11:35:24 crc kubenswrapper[4745]: E0121 11:35:24.312760 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" containerName="registry-server" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.312766 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" containerName="registry-server" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.313129 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a675bdd-f848-4ed5-98f0-e1065ffb031c" containerName="registry-server" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.313148 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff122cbb-4798-4fca-a61d-6f4ca070d626" containerName="registry-server" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.315503 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.509571 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-utilities\") pod \"certified-operators-rjsnl\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.509711 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-catalog-content\") pod \"certified-operators-rjsnl\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.509784 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdzvw\" (UniqueName: \"kubernetes.io/projected/5fb82089-68b3-4bed-b946-791abf4fe459-kube-api-access-hdzvw\") pod \"certified-operators-rjsnl\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.611481 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rjsnl"] Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.612077 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-utilities\") pod \"certified-operators-rjsnl\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.612147 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-catalog-content\") pod \"certified-operators-rjsnl\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.612176 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdzvw\" (UniqueName: \"kubernetes.io/projected/5fb82089-68b3-4bed-b946-791abf4fe459-kube-api-access-hdzvw\") pod \"certified-operators-rjsnl\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.613937 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-catalog-content\") pod \"certified-operators-rjsnl\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.614208 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-utilities\") pod \"certified-operators-rjsnl\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.641757 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdzvw\" (UniqueName: \"kubernetes.io/projected/5fb82089-68b3-4bed-b946-791abf4fe459-kube-api-access-hdzvw\") pod \"certified-operators-rjsnl\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:24 crc kubenswrapper[4745]: I0121 11:35:24.811427 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:25 crc kubenswrapper[4745]: I0121 11:35:25.731889 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rjsnl"] Jan 21 11:35:26 crc kubenswrapper[4745]: I0121 11:35:26.722402 4745 generic.go:334] "Generic (PLEG): container finished" podID="5fb82089-68b3-4bed-b946-791abf4fe459" containerID="b1d585a7f58a2cb8e668b6ad29db78ec94b85652cc947c9cd2ecc9feecc04274" exitCode=0 Jan 21 11:35:26 crc kubenswrapper[4745]: I0121 11:35:26.722884 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjsnl" event={"ID":"5fb82089-68b3-4bed-b946-791abf4fe459","Type":"ContainerDied","Data":"b1d585a7f58a2cb8e668b6ad29db78ec94b85652cc947c9cd2ecc9feecc04274"} Jan 21 11:35:26 crc kubenswrapper[4745]: I0121 11:35:26.723055 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjsnl" event={"ID":"5fb82089-68b3-4bed-b946-791abf4fe459","Type":"ContainerStarted","Data":"3e2e869386291b0d9e10e32e053d8a678c9f9dfcdb79af41c955ee011a6a6c4a"} Jan 21 11:35:26 crc kubenswrapper[4745]: I0121 11:35:26.730652 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:35:28 crc kubenswrapper[4745]: I0121 11:35:28.748165 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjsnl" event={"ID":"5fb82089-68b3-4bed-b946-791abf4fe459","Type":"ContainerStarted","Data":"0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e"} Jan 21 11:35:30 crc kubenswrapper[4745]: I0121 11:35:30.769023 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="0dd4138e-532c-446d-84ba-6bf954dfbd03" containerName="galera" probeResult="failure" output="command timed out" Jan 21 11:35:30 crc kubenswrapper[4745]: I0121 11:35:30.790075 4745 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="0dd4138e-532c-446d-84ba-6bf954dfbd03" containerName="galera" probeResult="failure" output="command timed out" Jan 21 11:35:31 crc kubenswrapper[4745]: I0121 11:35:31.327235 4745 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:35:31 crc kubenswrapper[4745]: I0121 11:35:31.327335 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:35:31 crc kubenswrapper[4745]: I0121 11:35:31.816130 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.041924006s: [/var/lib/containers/storage/overlay/f1d69dc078a5e2b4c443916132b726b1c5437bf309118c888bc47e3e1fd0fbda/diff /var/log/pods/openstack_memcached-0_9253af27-9c32-4977-9632-266bb434fd18/memcached/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:35:31 crc kubenswrapper[4745]: I0121 11:35:31.824815 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 3.582584363s: [/var/lib/containers/storage/overlay/7389cb80ab670dfe616d607eede55b03c90a707e7732f15c41d3f66cdba16409/diff /var/log/pods/openstack_cinder-api-0_c7a564b0-2da4-4d9c-a8a2-e61604758a1f/cinder-api/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:35:31 crc kubenswrapper[4745]: E0121 11:35:31.856155 4745 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" 
actual="1.856s" Jan 21 11:35:33 crc kubenswrapper[4745]: I0121 11:35:33.941617 4745 generic.go:334] "Generic (PLEG): container finished" podID="5fb82089-68b3-4bed-b946-791abf4fe459" containerID="0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e" exitCode=0 Jan 21 11:35:33 crc kubenswrapper[4745]: I0121 11:35:33.941646 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjsnl" event={"ID":"5fb82089-68b3-4bed-b946-791abf4fe459","Type":"ContainerDied","Data":"0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e"} Jan 21 11:35:36 crc kubenswrapper[4745]: I0121 11:35:36.972014 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjsnl" event={"ID":"5fb82089-68b3-4bed-b946-791abf4fe459","Type":"ContainerStarted","Data":"aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b"} Jan 21 11:35:44 crc kubenswrapper[4745]: I0121 11:35:44.812501 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:44 crc kubenswrapper[4745]: I0121 11:35:44.813151 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:44 crc kubenswrapper[4745]: I0121 11:35:44.862026 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:44 crc kubenswrapper[4745]: I0121 11:35:44.887783 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rjsnl" podStartSLOduration=13.165936084 podStartE2EDuration="20.887234968s" podCreationTimestamp="2026-01-21 11:35:24 +0000 UTC" firstStartedPulling="2026-01-21 11:35:26.724438509 +0000 UTC m=+3511.185226107" lastFinishedPulling="2026-01-21 11:35:34.445737393 +0000 UTC m=+3518.906524991" observedRunningTime="2026-01-21 
11:35:37.004903703 +0000 UTC m=+3521.465691301" watchObservedRunningTime="2026-01-21 11:35:44.887234968 +0000 UTC m=+3529.348022566" Jan 21 11:35:45 crc kubenswrapper[4745]: I0121 11:35:45.119027 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:45 crc kubenswrapper[4745]: I0121 11:35:45.166650 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rjsnl"] Jan 21 11:35:47 crc kubenswrapper[4745]: I0121 11:35:47.061000 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rjsnl" podUID="5fb82089-68b3-4bed-b946-791abf4fe459" containerName="registry-server" containerID="cri-o://aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b" gracePeriod=2 Jan 21 11:35:47 crc kubenswrapper[4745]: I0121 11:35:47.788015 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:47 crc kubenswrapper[4745]: I0121 11:35:47.852331 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-catalog-content\") pod \"5fb82089-68b3-4bed-b946-791abf4fe459\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " Jan 21 11:35:47 crc kubenswrapper[4745]: I0121 11:35:47.852712 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hdzvw\" (UniqueName: \"kubernetes.io/projected/5fb82089-68b3-4bed-b946-791abf4fe459-kube-api-access-hdzvw\") pod \"5fb82089-68b3-4bed-b946-791abf4fe459\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " Jan 21 11:35:47 crc kubenswrapper[4745]: I0121 11:35:47.852809 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-utilities\") pod \"5fb82089-68b3-4bed-b946-791abf4fe459\" (UID: \"5fb82089-68b3-4bed-b946-791abf4fe459\") " Jan 21 11:35:47 crc kubenswrapper[4745]: I0121 11:35:47.853744 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-utilities" (OuterVolumeSpecName: "utilities") pod "5fb82089-68b3-4bed-b946-791abf4fe459" (UID: "5fb82089-68b3-4bed-b946-791abf4fe459"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:35:47 crc kubenswrapper[4745]: I0121 11:35:47.873215 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fb82089-68b3-4bed-b946-791abf4fe459-kube-api-access-hdzvw" (OuterVolumeSpecName: "kube-api-access-hdzvw") pod "5fb82089-68b3-4bed-b946-791abf4fe459" (UID: "5fb82089-68b3-4bed-b946-791abf4fe459"). InnerVolumeSpecName "kube-api-access-hdzvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:35:47 crc kubenswrapper[4745]: I0121 11:35:47.908402 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5fb82089-68b3-4bed-b946-791abf4fe459" (UID: "5fb82089-68b3-4bed-b946-791abf4fe459"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:35:47 crc kubenswrapper[4745]: I0121 11:35:47.954996 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:35:47 crc kubenswrapper[4745]: I0121 11:35:47.955236 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hdzvw\" (UniqueName: \"kubernetes.io/projected/5fb82089-68b3-4bed-b946-791abf4fe459-kube-api-access-hdzvw\") on node \"crc\" DevicePath \"\"" Jan 21 11:35:47 crc kubenswrapper[4745]: I0121 11:35:47.955352 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5fb82089-68b3-4bed-b946-791abf4fe459-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.070706 4745 generic.go:334] "Generic (PLEG): container finished" podID="5fb82089-68b3-4bed-b946-791abf4fe459" containerID="aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b" exitCode=0 Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.070746 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjsnl" event={"ID":"5fb82089-68b3-4bed-b946-791abf4fe459","Type":"ContainerDied","Data":"aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b"} Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.070771 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rjsnl" event={"ID":"5fb82089-68b3-4bed-b946-791abf4fe459","Type":"ContainerDied","Data":"3e2e869386291b0d9e10e32e053d8a678c9f9dfcdb79af41c955ee011a6a6c4a"} Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.070985 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rjsnl" Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.071009 4745 scope.go:117] "RemoveContainer" containerID="aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b" Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.098129 4745 scope.go:117] "RemoveContainer" containerID="0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e" Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.100696 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rjsnl"] Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.119425 4745 scope.go:117] "RemoveContainer" containerID="b1d585a7f58a2cb8e668b6ad29db78ec94b85652cc947c9cd2ecc9feecc04274" Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.129557 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rjsnl"] Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.168444 4745 scope.go:117] "RemoveContainer" containerID="aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b" Jan 21 11:35:48 crc kubenswrapper[4745]: E0121 11:35:48.169911 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b\": container with ID starting with aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b not found: ID does not exist" containerID="aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b" Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.170121 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b"} err="failed to get container status \"aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b\": rpc error: code = NotFound desc = could not find 
container \"aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b\": container with ID starting with aba57266e64df85c1c6322fe78d8e5bb6601626b65f325f78b8d07fc625c016b not found: ID does not exist" Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.170158 4745 scope.go:117] "RemoveContainer" containerID="0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e" Jan 21 11:35:48 crc kubenswrapper[4745]: E0121 11:35:48.170707 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e\": container with ID starting with 0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e not found: ID does not exist" containerID="0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e" Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.170764 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e"} err="failed to get container status \"0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e\": rpc error: code = NotFound desc = could not find container \"0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e\": container with ID starting with 0ae729fdf200577c0c6473e78402ce0e10314577e2ecc96fa5161e5b4c24d75e not found: ID does not exist" Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.170802 4745 scope.go:117] "RemoveContainer" containerID="b1d585a7f58a2cb8e668b6ad29db78ec94b85652cc947c9cd2ecc9feecc04274" Jan 21 11:35:48 crc kubenswrapper[4745]: E0121 11:35:48.171128 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1d585a7f58a2cb8e668b6ad29db78ec94b85652cc947c9cd2ecc9feecc04274\": container with ID starting with b1d585a7f58a2cb8e668b6ad29db78ec94b85652cc947c9cd2ecc9feecc04274 not found: ID does 
not exist" containerID="b1d585a7f58a2cb8e668b6ad29db78ec94b85652cc947c9cd2ecc9feecc04274" Jan 21 11:35:48 crc kubenswrapper[4745]: I0121 11:35:48.171161 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1d585a7f58a2cb8e668b6ad29db78ec94b85652cc947c9cd2ecc9feecc04274"} err="failed to get container status \"b1d585a7f58a2cb8e668b6ad29db78ec94b85652cc947c9cd2ecc9feecc04274\": rpc error: code = NotFound desc = could not find container \"b1d585a7f58a2cb8e668b6ad29db78ec94b85652cc947c9cd2ecc9feecc04274\": container with ID starting with b1d585a7f58a2cb8e668b6ad29db78ec94b85652cc947c9cd2ecc9feecc04274 not found: ID does not exist" Jan 21 11:35:50 crc kubenswrapper[4745]: I0121 11:35:50.010361 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fb82089-68b3-4bed-b946-791abf4fe459" path="/var/lib/kubelet/pods/5fb82089-68b3-4bed-b946-791abf4fe459/volumes" Jan 21 11:36:21 crc kubenswrapper[4745]: I0121 11:36:20.861691 4745 patch_prober.go:28] interesting pod/route-controller-manager-8f6c6688d-jbcdv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:36:21 crc kubenswrapper[4745]: I0121 11:36:20.862124 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-8f6c6688d-jbcdv" podUID="7c4e8d39-76b3-475f-8d61-de34c3436ffc" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:36:21 crc kubenswrapper[4745]: I0121 11:36:21.013051 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.907277496s: 
[/var/lib/containers/storage/overlay/f83d7c11d89998820f64618e99e10a842ba00c693f5e87a1a1c1078347c75d92/diff /var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:36:21 crc kubenswrapper[4745]: I0121 11:36:21.016776 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.760653228s: [/var/lib/containers/storage/overlay/1447be7f6e6949b4b130af3fb8e88b5a16be1eecd34c37c9dd469511b5194983/diff /var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-dhkkd_ea889c30-b820-47fa-8232-f96ed56ba8e1/multus-admission-controller/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:36:21 crc kubenswrapper[4745]: I0121 11:36:21.033450 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-tf44k" podUID="59cfcfcd-7ed9-4f60-85ad-fcb228dc1895" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 21 11:36:21 crc kubenswrapper[4745]: I0121 11:36:21.115817 4745 trace.go:236] Trace[844940769]: "Calculate volume metrics of scripts for pod openstack/horizon-5cdbfc4d4d-pm6ln" (21-Jan-2026 11:36:19.914) (total time: 1098ms): Jan 21 11:36:21 crc kubenswrapper[4745]: Trace[844940769]: [1.098010694s] [1.098010694s] END Jan 21 11:36:21 crc kubenswrapper[4745]: I0121 11:36:21.115873 4745 trace.go:236] Trace[750107757]: "Calculate volume metrics of images for pod openshift-machine-api/machine-api-operator-5694c8668f-dfzgf" (21-Jan-2026 11:36:19.831) (total time: 1196ms): Jan 21 11:36:21 crc kubenswrapper[4745]: Trace[750107757]: [1.196569417s] [1.196569417s] END Jan 21 11:36:45 crc kubenswrapper[4745]: I0121 11:36:45.866897 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:36:45 crc kubenswrapper[4745]: I0121 11:36:45.867431 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:36:58 crc kubenswrapper[4745]: I0121 11:36:58.356303 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-l67sp"] Jan 21 11:36:58 crc kubenswrapper[4745]: E0121 11:36:58.358256 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb82089-68b3-4bed-b946-791abf4fe459" containerName="extract-utilities" Jan 21 11:36:58 crc kubenswrapper[4745]: I0121 11:36:58.358291 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb82089-68b3-4bed-b946-791abf4fe459" containerName="extract-utilities" Jan 21 11:36:58 crc kubenswrapper[4745]: E0121 11:36:58.358317 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb82089-68b3-4bed-b946-791abf4fe459" containerName="registry-server" Jan 21 11:36:58 crc kubenswrapper[4745]: I0121 11:36:58.358327 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb82089-68b3-4bed-b946-791abf4fe459" containerName="registry-server" Jan 21 11:36:58 crc kubenswrapper[4745]: E0121 11:36:58.358356 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fb82089-68b3-4bed-b946-791abf4fe459" containerName="extract-content" Jan 21 11:36:58 crc kubenswrapper[4745]: I0121 11:36:58.358364 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fb82089-68b3-4bed-b946-791abf4fe459" containerName="extract-content" Jan 21 11:36:58 crc kubenswrapper[4745]: I0121 11:36:58.359281 4745 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="5fb82089-68b3-4bed-b946-791abf4fe459" containerName="registry-server" Jan 21 11:36:58 crc kubenswrapper[4745]: I0121 11:36:58.363301 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:36:58 crc kubenswrapper[4745]: I0121 11:36:58.371571 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2hpl\" (UniqueName: \"kubernetes.io/projected/dc63365e-5036-4f03-a0ab-fee15cb8b88d-kube-api-access-j2hpl\") pod \"community-operators-l67sp\" (UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:36:58 crc kubenswrapper[4745]: I0121 11:36:58.371640 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-catalog-content\") pod \"community-operators-l67sp\" (UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:36:58 crc kubenswrapper[4745]: I0121 11:36:58.371665 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-utilities\") pod \"community-operators-l67sp\" (UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:36:58 crc kubenswrapper[4745]: I0121 11:36:58.473676 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2hpl\" (UniqueName: \"kubernetes.io/projected/dc63365e-5036-4f03-a0ab-fee15cb8b88d-kube-api-access-j2hpl\") pod \"community-operators-l67sp\" (UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:36:58 crc 
kubenswrapper[4745]: I0121 11:36:58.473731 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-catalog-content\") pod \"community-operators-l67sp\" (UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:36:58 crc kubenswrapper[4745]: I0121 11:36:58.473751 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-utilities\") pod \"community-operators-l67sp\" (UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:36:59 crc kubenswrapper[4745]: I0121 11:36:59.155735 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-utilities\") pod \"community-operators-l67sp\" (UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:36:59 crc kubenswrapper[4745]: I0121 11:36:59.156178 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-catalog-content\") pod \"community-operators-l67sp\" (UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:36:59 crc kubenswrapper[4745]: I0121 11:36:59.212556 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l67sp"] Jan 21 11:36:59 crc kubenswrapper[4745]: I0121 11:36:59.222728 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2hpl\" (UniqueName: \"kubernetes.io/projected/dc63365e-5036-4f03-a0ab-fee15cb8b88d-kube-api-access-j2hpl\") pod \"community-operators-l67sp\" 
(UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:36:59 crc kubenswrapper[4745]: I0121 11:36:59.284174 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:37:01 crc kubenswrapper[4745]: I0121 11:37:01.428896 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-l67sp"] Jan 21 11:37:02 crc kubenswrapper[4745]: I0121 11:37:02.063316 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67sp" event={"ID":"dc63365e-5036-4f03-a0ab-fee15cb8b88d","Type":"ContainerStarted","Data":"56c19029578134689edb7b9d32f06491916caa0b7a9c7cb2af1023f7579d7a6a"} Jan 21 11:37:02 crc kubenswrapper[4745]: I0121 11:37:02.063669 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67sp" event={"ID":"dc63365e-5036-4f03-a0ab-fee15cb8b88d","Type":"ContainerStarted","Data":"078b82a1bb3c9b485e03b64bdeb382b2c9b0bb741cdb58dcc7aabdf114ee96d3"} Jan 21 11:37:03 crc kubenswrapper[4745]: I0121 11:37:03.074006 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67sp" event={"ID":"dc63365e-5036-4f03-a0ab-fee15cb8b88d","Type":"ContainerDied","Data":"56c19029578134689edb7b9d32f06491916caa0b7a9c7cb2af1023f7579d7a6a"} Jan 21 11:37:03 crc kubenswrapper[4745]: I0121 11:37:03.074215 4745 generic.go:334] "Generic (PLEG): container finished" podID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" containerID="56c19029578134689edb7b9d32f06491916caa0b7a9c7cb2af1023f7579d7a6a" exitCode=0 Jan 21 11:37:06 crc kubenswrapper[4745]: I0121 11:37:06.137183 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67sp" 
event={"ID":"dc63365e-5036-4f03-a0ab-fee15cb8b88d","Type":"ContainerStarted","Data":"651a8fc5c79d46472873daa50f00ebe05457beb35c0d1a0c15d0799795f4d2cf"} Jan 21 11:37:11 crc kubenswrapper[4745]: I0121 11:37:11.344403 4745 generic.go:334] "Generic (PLEG): container finished" podID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" containerID="651a8fc5c79d46472873daa50f00ebe05457beb35c0d1a0c15d0799795f4d2cf" exitCode=0 Jan 21 11:37:11 crc kubenswrapper[4745]: I0121 11:37:11.344645 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67sp" event={"ID":"dc63365e-5036-4f03-a0ab-fee15cb8b88d","Type":"ContainerDied","Data":"651a8fc5c79d46472873daa50f00ebe05457beb35c0d1a0c15d0799795f4d2cf"} Jan 21 11:37:13 crc kubenswrapper[4745]: I0121 11:37:13.363152 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67sp" event={"ID":"dc63365e-5036-4f03-a0ab-fee15cb8b88d","Type":"ContainerStarted","Data":"bd69ecf55fbe4cd904cd16817d311aed4026955261da691a81f7050db00631eb"} Jan 21 11:37:13 crc kubenswrapper[4745]: I0121 11:37:13.395182 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-l67sp" podStartSLOduration=6.174751173 podStartE2EDuration="15.394519054s" podCreationTimestamp="2026-01-21 11:36:58 +0000 UTC" firstStartedPulling="2026-01-21 11:37:03.075705631 +0000 UTC m=+3607.536493249" lastFinishedPulling="2026-01-21 11:37:12.295473532 +0000 UTC m=+3616.756261130" observedRunningTime="2026-01-21 11:37:13.384957415 +0000 UTC m=+3617.845745023" watchObservedRunningTime="2026-01-21 11:37:13.394519054 +0000 UTC m=+3617.855306652" Jan 21 11:37:15 crc kubenswrapper[4745]: I0121 11:37:15.866470 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Jan 21 11:37:15 crc kubenswrapper[4745]: I0121 11:37:15.867203 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:37:19 crc kubenswrapper[4745]: I0121 11:37:19.285189 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:37:19 crc kubenswrapper[4745]: I0121 11:37:19.285775 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:37:19 crc kubenswrapper[4745]: I0121 11:37:19.335902 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:37:19 crc kubenswrapper[4745]: I0121 11:37:19.461202 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:37:19 crc kubenswrapper[4745]: I0121 11:37:19.582480 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l67sp"] Jan 21 11:37:21 crc kubenswrapper[4745]: I0121 11:37:21.511337 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-l67sp" podUID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" containerName="registry-server" containerID="cri-o://bd69ecf55fbe4cd904cd16817d311aed4026955261da691a81f7050db00631eb" gracePeriod=2 Jan 21 11:37:22 crc kubenswrapper[4745]: I0121 11:37:22.438051 4745 generic.go:334] "Generic (PLEG): container finished" podID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" containerID="bd69ecf55fbe4cd904cd16817d311aed4026955261da691a81f7050db00631eb" exitCode=0 
Jan 21 11:37:22 crc kubenswrapper[4745]: I0121 11:37:22.438147 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67sp" event={"ID":"dc63365e-5036-4f03-a0ab-fee15cb8b88d","Type":"ContainerDied","Data":"bd69ecf55fbe4cd904cd16817d311aed4026955261da691a81f7050db00631eb"} Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.432489 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.450437 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-l67sp" event={"ID":"dc63365e-5036-4f03-a0ab-fee15cb8b88d","Type":"ContainerDied","Data":"078b82a1bb3c9b485e03b64bdeb382b2c9b0bb741cdb58dcc7aabdf114ee96d3"} Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.450503 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-l67sp" Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.450843 4745 scope.go:117] "RemoveContainer" containerID="bd69ecf55fbe4cd904cd16817d311aed4026955261da691a81f7050db00631eb" Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.500925 4745 scope.go:117] "RemoveContainer" containerID="651a8fc5c79d46472873daa50f00ebe05457beb35c0d1a0c15d0799795f4d2cf" Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.532165 4745 scope.go:117] "RemoveContainer" containerID="56c19029578134689edb7b9d32f06491916caa0b7a9c7cb2af1023f7579d7a6a" Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.564028 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2hpl\" (UniqueName: \"kubernetes.io/projected/dc63365e-5036-4f03-a0ab-fee15cb8b88d-kube-api-access-j2hpl\") pod \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\" (UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 
11:37:23.564147 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-utilities\") pod \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\" (UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.564414 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-catalog-content\") pod \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\" (UID: \"dc63365e-5036-4f03-a0ab-fee15cb8b88d\") " Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.566041 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-utilities" (OuterVolumeSpecName: "utilities") pod "dc63365e-5036-4f03-a0ab-fee15cb8b88d" (UID: "dc63365e-5036-4f03-a0ab-fee15cb8b88d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.576110 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc63365e-5036-4f03-a0ab-fee15cb8b88d-kube-api-access-j2hpl" (OuterVolumeSpecName: "kube-api-access-j2hpl") pod "dc63365e-5036-4f03-a0ab-fee15cb8b88d" (UID: "dc63365e-5036-4f03-a0ab-fee15cb8b88d"). InnerVolumeSpecName "kube-api-access-j2hpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.634170 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc63365e-5036-4f03-a0ab-fee15cb8b88d" (UID: "dc63365e-5036-4f03-a0ab-fee15cb8b88d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.666912 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.666975 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2hpl\" (UniqueName: \"kubernetes.io/projected/dc63365e-5036-4f03-a0ab-fee15cb8b88d-kube-api-access-j2hpl\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.667020 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc63365e-5036-4f03-a0ab-fee15cb8b88d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.789375 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-l67sp"] Jan 21 11:37:23 crc kubenswrapper[4745]: I0121 11:37:23.798578 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-l67sp"] Jan 21 11:37:24 crc kubenswrapper[4745]: I0121 11:37:24.014638 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" path="/var/lib/kubelet/pods/dc63365e-5036-4f03-a0ab-fee15cb8b88d/volumes" Jan 21 11:37:43 crc kubenswrapper[4745]: I0121 11:37:43.826075 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6nzgh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:37:43 crc kubenswrapper[4745]: I0121 11:37:43.829526 4745 patch_prober.go:28] interesting 
pod/openshift-config-operator-7777fb866f-6nzgh container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:37:43 crc kubenswrapper[4745]: I0121 11:37:43.829887 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" podUID="5d25df07-ad4c-4a02-bd0b-241e69a4f0f4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 11:37:43 crc kubenswrapper[4745]: I0121 11:37:43.830149 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" podUID="5d25df07-ad4c-4a02-bd0b-241e69a4f0f4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:37:45 crc kubenswrapper[4745]: I0121 11:37:45.866415 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:37:45 crc kubenswrapper[4745]: I0121 11:37:45.866959 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:37:45 crc 
kubenswrapper[4745]: I0121 11:37:45.867032 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 11:37:45 crc kubenswrapper[4745]: I0121 11:37:45.868325 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d6b7250c5e47a5dda26aa19ad12f3dae68ad93f127c10ee247c84ab3a515457"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:37:45 crc kubenswrapper[4745]: I0121 11:37:45.868414 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://3d6b7250c5e47a5dda26aa19ad12f3dae68ad93f127c10ee247c84ab3a515457" gracePeriod=600 Jan 21 11:37:46 crc kubenswrapper[4745]: E0121 11:37:46.130422 4745 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8abb3db_dbf8_4568_a6dc_c88674d222b1.slice/crio-3d6b7250c5e47a5dda26aa19ad12f3dae68ad93f127c10ee247c84ab3a515457.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8abb3db_dbf8_4568_a6dc_c88674d222b1.slice/crio-conmon-3d6b7250c5e47a5dda26aa19ad12f3dae68ad93f127c10ee247c84ab3a515457.scope\": RecentStats: unable to find data in memory cache]" Jan 21 11:37:46 crc kubenswrapper[4745]: I0121 11:37:46.651684 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="3d6b7250c5e47a5dda26aa19ad12f3dae68ad93f127c10ee247c84ab3a515457" exitCode=0 Jan 21 11:37:46 crc kubenswrapper[4745]: I0121 11:37:46.651834 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"3d6b7250c5e47a5dda26aa19ad12f3dae68ad93f127c10ee247c84ab3a515457"} Jan 21 11:37:46 crc kubenswrapper[4745]: I0121 11:37:46.652025 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6"} Jan 21 11:37:46 crc kubenswrapper[4745]: I0121 11:37:46.652051 4745 scope.go:117] "RemoveContainer" containerID="3ab90559cf09fa7d1808191d67ecbbff24e2fdd724b01da4f469d21299779929" Jan 21 11:37:55 crc kubenswrapper[4745]: I0121 11:37:55.961772 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:37:56 crc kubenswrapper[4745]: I0121 11:37:56.012893 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:37:56 crc kubenswrapper[4745]: I0121 11:37:56.067431 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:37:56 crc kubenswrapper[4745]: I0121 11:37:56.070636 4745 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-gwvtn" podUID="fe3c7d57-12a7-426c-8c02-fe7f24949bae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:37:56 crc kubenswrapper[4745]: I0121 11:37:56.580644 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.124603192s: [/var/lib/containers/storage/overlay/d31ffd77dd14d54020dfa739fff5b38e7b384eca907da70a4de29f3bf980c14a/diff /var/log/pods/openstack_horizon-5cdbfc4d4d-pm6ln_1b30531d-e957-4efd-b09c-d5d0b5fd1382/horizon/3.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:38:03 crc kubenswrapper[4745]: I0121 11:38:03.688603 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.222958464s: [/var/lib/containers/storage/overlay/fe7e233c61da5a2730a4cd2c7aac2473ef07a878b3726df9d9c7afd9df26631e/diff /var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-x9mpf_42c37f0d-415a-4a72-ae98-07551477c6cc/manager/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:39:10 crc kubenswrapper[4745]: I0121 11:39:09.192875 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-78d57d4fdd-dxmll" podUID="8ed49bb1-d169-4518-b064-3fb35fd1bad0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:39:23 crc kubenswrapper[4745]: I0121 11:39:22.755769 4745 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6nzgh container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:39:23 crc 
kubenswrapper[4745]: I0121 11:39:22.756409 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6nzgh" podUID="5d25df07-ad4c-4a02-bd0b-241e69a4f0f4" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:35.376910 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="" start-of-body= Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:35.417949 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Liveness probe status=failure output="" start-of-body= Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:35.721181 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:35.721256 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:35.722727 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= 
Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:35.722761 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:36.420243 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Readiness probe status=failure output="" start-of-body= Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:36.461740 4745 patch_prober.go:28] interesting pod/downloads-7954f5f757-gwvtn container/download-server namespace/openshift-console: Liveness probe status=failure output="" start-of-body= Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:36.672980 4745 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n5ft4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:36.673046 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" podUID="f9c06282-abf7-4d46-90df-6d48394448cf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 11:39:36 crc kubenswrapper[4745]: E0121 11:39:36.674516 4745 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.675s" Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:36.685888 4745 
trace.go:236] Trace[889821032]: "Calculate volume metrics of registry-certificates for pod openshift-image-registry/image-registry-66df7c8f76-4c4t9" (21-Jan-2026 11:39:33.623) (total time: 3050ms): Jan 21 11:39:36 crc kubenswrapper[4745]: Trace[889821032]: [3.050277399s] [3.050277399s] END Jan 21 11:39:36 crc kubenswrapper[4745]: I0121 11:39:36.766761 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.743047561s: [/var/lib/containers/storage/overlay/0757f4374bf75b9d1f5016fcc42a30d3fc9aa5c3acc9aca8f0161d745e273c9c/diff /var/log/pods/openstack_ceilometer-0_a3f51f01-ad12-40ab-a599-bca8a2eb5cec/ceilometer-central-agent/0.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:39:47 crc kubenswrapper[4745]: I0121 11:39:45.377596 4745 patch_prober.go:28] interesting pod/console-operator-58897d9998-9dn2q container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.16:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:39:47 crc kubenswrapper[4745]: I0121 11:39:45.378308 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-9dn2q" podUID="28428682-3f1f-4077-887e-f1570b385a8c" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.16:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 21 11:39:47 crc kubenswrapper[4745]: I0121 11:39:45.763886 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 11:39:47 crc kubenswrapper[4745]: I0121 11:39:45.763945 4745 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 11:39:47 crc kubenswrapper[4745]: I0121 11:39:47.785524 4745 trace.go:236] Trace[294184544]: "Calculate volume metrics of scripts for pod openstack/ovn-controller-ovs-xs6fp" (21-Jan-2026 11:39:46.084) (total time: 1700ms): Jan 21 11:39:47 crc kubenswrapper[4745]: Trace[294184544]: [1.700714863s] [1.700714863s] END Jan 21 11:39:47 crc kubenswrapper[4745]: I0121 11:39:47.844515 4745 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.712090632s: [/var/lib/containers/storage/overlay/7124166624ea6577b5f847399769e59bec986b551664c63b0eb31439e436aaff/diff /var/log/pods/openshift-console_downloads-7954f5f757-gwvtn_fe3c7d57-12a7-426c-8c02-fe7f24949bae/download-server/1.log]; will not log again for this container unless duration exceeds 2s Jan 21 11:40:15 crc kubenswrapper[4745]: I0121 11:40:15.866795 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:40:15 crc kubenswrapper[4745]: I0121 11:40:15.867680 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:40:45 crc kubenswrapper[4745]: I0121 11:40:45.866382 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:40:45 crc kubenswrapper[4745]: I0121 11:40:45.866967 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:41:15 crc kubenswrapper[4745]: I0121 11:41:15.866224 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:41:15 crc kubenswrapper[4745]: I0121 11:41:15.866628 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:41:15 crc kubenswrapper[4745]: I0121 11:41:15.866667 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 11:41:15 crc kubenswrapper[4745]: I0121 11:41:15.868070 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Jan 21 11:41:15 crc kubenswrapper[4745]: I0121 11:41:15.868491 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" gracePeriod=600 Jan 21 11:41:16 crc kubenswrapper[4745]: E0121 11:41:16.499039 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:41:16 crc kubenswrapper[4745]: I0121 11:41:16.653640 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" exitCode=0 Jan 21 11:41:16 crc kubenswrapper[4745]: I0121 11:41:16.653727 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6"} Jan 21 11:41:16 crc kubenswrapper[4745]: I0121 11:41:16.654270 4745 scope.go:117] "RemoveContainer" containerID="3d6b7250c5e47a5dda26aa19ad12f3dae68ad93f127c10ee247c84ab3a515457" Jan 21 11:41:16 crc kubenswrapper[4745]: I0121 11:41:16.654956 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:41:16 crc kubenswrapper[4745]: E0121 11:41:16.655205 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:41:31 crc kubenswrapper[4745]: I0121 11:41:31.001088 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:41:31 crc kubenswrapper[4745]: E0121 11:41:31.001860 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:41:46 crc kubenswrapper[4745]: I0121 11:41:46.012574 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:41:46 crc kubenswrapper[4745]: E0121 11:41:46.018418 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:41:59 crc kubenswrapper[4745]: I0121 11:41:59.001394 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:41:59 crc kubenswrapper[4745]: E0121 11:41:59.002278 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:42:14 crc kubenswrapper[4745]: I0121 11:42:14.000596 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:42:14 crc kubenswrapper[4745]: E0121 11:42:14.001424 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:42:29 crc kubenswrapper[4745]: I0121 11:42:29.001757 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:42:29 crc kubenswrapper[4745]: E0121 11:42:29.003039 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:42:43 crc kubenswrapper[4745]: I0121 11:42:43.000499 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:42:43 crc kubenswrapper[4745]: E0121 11:42:43.001404 4745 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:42:54 crc kubenswrapper[4745]: I0121 11:42:54.001136 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:42:54 crc kubenswrapper[4745]: E0121 11:42:54.001977 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:43:06 crc kubenswrapper[4745]: I0121 11:43:06.008443 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:43:06 crc kubenswrapper[4745]: E0121 11:43:06.009306 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:43:17 crc kubenswrapper[4745]: I0121 11:43:17.000881 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:43:17 crc kubenswrapper[4745]: E0121 11:43:17.001800 4745 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.002314 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6"
Jan 21 11:43:30 crc kubenswrapper[4745]: E0121 11:43:30.003428 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.546933 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-spgz4"]
Jan 21 11:43:30 crc kubenswrapper[4745]: E0121 11:43:30.551047 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" containerName="registry-server"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.551075 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" containerName="registry-server"
Jan 21 11:43:30 crc kubenswrapper[4745]: E0121 11:43:30.551116 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" containerName="extract-content"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.551128 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" containerName="extract-content"
Jan 21 11:43:30 crc kubenswrapper[4745]: E0121 11:43:30.551151 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" containerName="extract-utilities"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.551159 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" containerName="extract-utilities"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.551562 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc63365e-5036-4f03-a0ab-fee15cb8b88d" containerName="registry-server"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.554701 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.634712 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmt76\" (UniqueName: \"kubernetes.io/projected/d5880f0a-e6d5-40ef-8e2c-f14943028947-kube-api-access-lmt76\") pod \"redhat-operators-spgz4\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") " pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.634817 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-catalog-content\") pod \"redhat-operators-spgz4\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") " pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.634997 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-utilities\") pod \"redhat-operators-spgz4\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") " pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.673657 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spgz4"]
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.736317 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-catalog-content\") pod \"redhat-operators-spgz4\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") " pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.736451 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-utilities\") pod \"redhat-operators-spgz4\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") " pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.736497 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmt76\" (UniqueName: \"kubernetes.io/projected/d5880f0a-e6d5-40ef-8e2c-f14943028947-kube-api-access-lmt76\") pod \"redhat-operators-spgz4\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") " pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.739620 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-utilities\") pod \"redhat-operators-spgz4\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") " pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.740585 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-catalog-content\") pod \"redhat-operators-spgz4\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") " pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.788445 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmt76\" (UniqueName: \"kubernetes.io/projected/d5880f0a-e6d5-40ef-8e2c-f14943028947-kube-api-access-lmt76\") pod \"redhat-operators-spgz4\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") " pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:30 crc kubenswrapper[4745]: I0121 11:43:30.874405 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:31 crc kubenswrapper[4745]: I0121 11:43:31.657670 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spgz4"]
Jan 21 11:43:32 crc kubenswrapper[4745]: I0121 11:43:32.111232 4745 generic.go:334] "Generic (PLEG): container finished" podID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerID="f9718b60c5c66d19344e3b96857a8463929a662c023de87bb8b9decb5a6800db" exitCode=0
Jan 21 11:43:32 crc kubenswrapper[4745]: I0121 11:43:32.111270 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spgz4" event={"ID":"d5880f0a-e6d5-40ef-8e2c-f14943028947","Type":"ContainerDied","Data":"f9718b60c5c66d19344e3b96857a8463929a662c023de87bb8b9decb5a6800db"}
Jan 21 11:43:32 crc kubenswrapper[4745]: I0121 11:43:32.111596 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spgz4" event={"ID":"d5880f0a-e6d5-40ef-8e2c-f14943028947","Type":"ContainerStarted","Data":"4997b6370adcc076c49a52b880ab761a751d2b95db43294a31ccfce56b5d24ac"}
Jan 21 11:43:32 crc kubenswrapper[4745]: I0121 11:43:32.115152 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 21 11:43:33 crc kubenswrapper[4745]: I0121 11:43:33.121914 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spgz4" event={"ID":"d5880f0a-e6d5-40ef-8e2c-f14943028947","Type":"ContainerStarted","Data":"ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba"}
Jan 21 11:43:37 crc kubenswrapper[4745]: I0121 11:43:37.011516 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-sqhft" podUID="784904b1-a1d9-4319-be67-34e3dfdc1c9a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.62:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 11:43:37 crc kubenswrapper[4745]: I0121 11:43:37.032143 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" podUID="a96f3189-7bbc-404d-ad6d-05b8fefb65fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 11:43:37 crc kubenswrapper[4745]: I0121 11:43:37.076153 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" podUID="c0985a55-6ede-4214-87fe-27cb5668dd86" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 11:43:37 crc kubenswrapper[4745]: I0121 11:43:37.076752 4745 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-szgtz container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 21 11:43:37 crc kubenswrapper[4745]: I0121 11:43:37.077310 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" podUID="f5752ba7-8465-4a19-b7a3-d2b4effe5f23" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 11:43:37 crc kubenswrapper[4745]: I0121 11:43:37.109599 4745 trace.go:236] Trace[223656182]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/redhat-operators-2q52q" (21-Jan-2026 11:43:35.814) (total time: 1294ms):
Jan 21 11:43:37 crc kubenswrapper[4745]: Trace[223656182]: [1.29472995s] [1.29472995s] END
Jan 21 11:43:37 crc kubenswrapper[4745]: I0121 11:43:37.118751 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-9f9vp" podUID="db2f79cd-c6c7-459f-bf98-002583ba5ddd" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 11:43:37 crc kubenswrapper[4745]: I0121 11:43:37.118770 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" podUID="1be9da42-8db6-47b9-b7ec-788b04db264d" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 11:43:37 crc kubenswrapper[4745]: I0121 11:43:37.119053 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" podUID="8381ff45-ae46-437a-894e-1530d39397f8" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.53:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 11:43:37 crc kubenswrapper[4745]: I0121 11:43:37.119569 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" podUID="2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.65:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 21 11:43:39 crc kubenswrapper[4745]: I0121 11:43:39.173268 4745 generic.go:334] "Generic (PLEG): container finished" podID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerID="ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba" exitCode=0
Jan 21 11:43:39 crc kubenswrapper[4745]: I0121 11:43:39.173392 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spgz4" event={"ID":"d5880f0a-e6d5-40ef-8e2c-f14943028947","Type":"ContainerDied","Data":"ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba"}
Jan 21 11:43:40 crc kubenswrapper[4745]: I0121 11:43:40.182968 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spgz4" event={"ID":"d5880f0a-e6d5-40ef-8e2c-f14943028947","Type":"ContainerStarted","Data":"465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe"}
Jan 21 11:43:40 crc kubenswrapper[4745]: I0121 11:43:40.207989 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-spgz4" podStartSLOduration=2.696497031 podStartE2EDuration="10.207656085s" podCreationTimestamp="2026-01-21 11:43:30 +0000 UTC" firstStartedPulling="2026-01-21 11:43:32.113437044 +0000 UTC m=+3996.574224642" lastFinishedPulling="2026-01-21 11:43:39.624596098 +0000 UTC m=+4004.085383696" observedRunningTime="2026-01-21 11:43:40.206282977 +0000 UTC m=+4004.667070575" watchObservedRunningTime="2026-01-21 11:43:40.207656085 +0000 UTC m=+4004.668443683"
Jan 21 11:43:40 crc kubenswrapper[4745]: I0121 11:43:40.877103 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:40 crc kubenswrapper[4745]: I0121 11:43:40.877165 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:43:41 crc kubenswrapper[4745]: I0121 11:43:41.928793 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spgz4" podUID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerName="registry-server" probeResult="failure" output=<
Jan 21 11:43:41 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s
Jan 21 11:43:41 crc kubenswrapper[4745]: >
Jan 21 11:43:43 crc kubenswrapper[4745]: I0121 11:43:43.000601 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6"
Jan 21 11:43:43 crc kubenswrapper[4745]: E0121 11:43:43.001071 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:43:51 crc kubenswrapper[4745]: I0121 11:43:51.944740 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spgz4" podUID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerName="registry-server" probeResult="failure" output=<
Jan 21 11:43:51 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s
Jan 21 11:43:51 crc kubenswrapper[4745]: >
Jan 21 11:43:57 crc kubenswrapper[4745]: I0121 11:43:57.000136 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6"
Jan 21 11:43:57 crc kubenswrapper[4745]: E0121 11:43:57.001277 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:44:00 crc kubenswrapper[4745]: I0121 11:44:00.948268 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:44:01 crc kubenswrapper[4745]: I0121 11:44:01.036636 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:44:01 crc kubenswrapper[4745]: I0121 11:44:01.739422 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spgz4"]
Jan 21 11:44:02 crc kubenswrapper[4745]: I0121 11:44:02.381062 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-spgz4" podUID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerName="registry-server" containerID="cri-o://465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe" gracePeriod=2
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.135177 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.272660 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-utilities\") pod \"d5880f0a-e6d5-40ef-8e2c-f14943028947\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") "
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.272846 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-catalog-content\") pod \"d5880f0a-e6d5-40ef-8e2c-f14943028947\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") "
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.272952 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmt76\" (UniqueName: \"kubernetes.io/projected/d5880f0a-e6d5-40ef-8e2c-f14943028947-kube-api-access-lmt76\") pod \"d5880f0a-e6d5-40ef-8e2c-f14943028947\" (UID: \"d5880f0a-e6d5-40ef-8e2c-f14943028947\") "
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.274083 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-utilities" (OuterVolumeSpecName: "utilities") pod "d5880f0a-e6d5-40ef-8e2c-f14943028947" (UID: "d5880f0a-e6d5-40ef-8e2c-f14943028947"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.285000 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5880f0a-e6d5-40ef-8e2c-f14943028947-kube-api-access-lmt76" (OuterVolumeSpecName: "kube-api-access-lmt76") pod "d5880f0a-e6d5-40ef-8e2c-f14943028947" (UID: "d5880f0a-e6d5-40ef-8e2c-f14943028947"). InnerVolumeSpecName "kube-api-access-lmt76". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.375799 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.375832 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmt76\" (UniqueName: \"kubernetes.io/projected/d5880f0a-e6d5-40ef-8e2c-f14943028947-kube-api-access-lmt76\") on node \"crc\" DevicePath \"\""
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.390560 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spgz4" event={"ID":"d5880f0a-e6d5-40ef-8e2c-f14943028947","Type":"ContainerDied","Data":"465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe"}
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.390635 4745 scope.go:117] "RemoveContainer" containerID="465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe"
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.390668 4745 generic.go:334] "Generic (PLEG): container finished" podID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerID="465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe" exitCode=0
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.390795 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spgz4" event={"ID":"d5880f0a-e6d5-40ef-8e2c-f14943028947","Type":"ContainerDied","Data":"4997b6370adcc076c49a52b880ab761a751d2b95db43294a31ccfce56b5d24ac"}
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.391913 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spgz4"
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.398868 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5880f0a-e6d5-40ef-8e2c-f14943028947" (UID: "d5880f0a-e6d5-40ef-8e2c-f14943028947"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.412597 4745 scope.go:117] "RemoveContainer" containerID="ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba"
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.436825 4745 scope.go:117] "RemoveContainer" containerID="f9718b60c5c66d19344e3b96857a8463929a662c023de87bb8b9decb5a6800db"
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.477444 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5880f0a-e6d5-40ef-8e2c-f14943028947-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.480581 4745 scope.go:117] "RemoveContainer" containerID="465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe"
Jan 21 11:44:03 crc kubenswrapper[4745]: E0121 11:44:03.481572 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe\": container with ID starting with 465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe not found: ID does not exist" containerID="465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe"
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.481662 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe"} err="failed to get container status \"465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe\": rpc error: code = NotFound desc = could not find container \"465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe\": container with ID starting with 465a4b4586ab8b583afa3a2df2f2553d950b4bd257dab28831c89df4321e7abe not found: ID does not exist"
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.481699 4745 scope.go:117] "RemoveContainer" containerID="ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba"
Jan 21 11:44:03 crc kubenswrapper[4745]: E0121 11:44:03.482165 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba\": container with ID starting with ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba not found: ID does not exist" containerID="ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba"
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.482202 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba"} err="failed to get container status \"ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba\": rpc error: code = NotFound desc = could not find container \"ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba\": container with ID starting with ace5d211a83f957e92f0125755c83715ff1ea4d8655ee62fadbca398802e91ba not found: ID does not exist"
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.482222 4745 scope.go:117] "RemoveContainer" containerID="f9718b60c5c66d19344e3b96857a8463929a662c023de87bb8b9decb5a6800db"
Jan 21 11:44:03 crc kubenswrapper[4745]: E0121 11:44:03.482442 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9718b60c5c66d19344e3b96857a8463929a662c023de87bb8b9decb5a6800db\": container with ID starting with f9718b60c5c66d19344e3b96857a8463929a662c023de87bb8b9decb5a6800db not found: ID does not exist" containerID="f9718b60c5c66d19344e3b96857a8463929a662c023de87bb8b9decb5a6800db"
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.482462 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9718b60c5c66d19344e3b96857a8463929a662c023de87bb8b9decb5a6800db"} err="failed to get container status \"f9718b60c5c66d19344e3b96857a8463929a662c023de87bb8b9decb5a6800db\": rpc error: code = NotFound desc = could not find container \"f9718b60c5c66d19344e3b96857a8463929a662c023de87bb8b9decb5a6800db\": container with ID starting with f9718b60c5c66d19344e3b96857a8463929a662c023de87bb8b9decb5a6800db not found: ID does not exist"
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.729702 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spgz4"]
Jan 21 11:44:03 crc kubenswrapper[4745]: I0121 11:44:03.737716 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-spgz4"]
Jan 21 11:44:04 crc kubenswrapper[4745]: I0121 11:44:04.016014 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5880f0a-e6d5-40ef-8e2c-f14943028947" path="/var/lib/kubelet/pods/d5880f0a-e6d5-40ef-8e2c-f14943028947/volumes"
Jan 21 11:44:12 crc kubenswrapper[4745]: I0121 11:44:12.005137 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6"
Jan 21 11:44:12 crc kubenswrapper[4745]: E0121 11:44:12.005941 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:44:23 crc kubenswrapper[4745]: I0121 11:44:23.000515 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6"
Jan 21 11:44:23 crc kubenswrapper[4745]: E0121 11:44:23.001168 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.058678 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6dmm8"]
Jan 21 11:44:26 crc kubenswrapper[4745]: E0121 11:44:26.060822 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerName="extract-content"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.060851 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerName="extract-content"
Jan 21 11:44:26 crc kubenswrapper[4745]: E0121 11:44:26.060884 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerName="extract-utilities"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.060891 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerName="extract-utilities"
Jan 21 11:44:26 crc kubenswrapper[4745]: E0121 11:44:26.060915 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerName="registry-server"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.060921 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerName="registry-server"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.061343 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5880f0a-e6d5-40ef-8e2c-f14943028947" containerName="registry-server"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.063387 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.093513 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dmm8"]
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.217574 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-utilities\") pod \"redhat-marketplace-6dmm8\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.217728 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-catalog-content\") pod \"redhat-marketplace-6dmm8\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.217922 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g5bf\" (UniqueName: \"kubernetes.io/projected/c4e185a4-8ef7-421b-90d6-61149cf902d9-kube-api-access-7g5bf\") pod \"redhat-marketplace-6dmm8\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.319798 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7g5bf\" (UniqueName: \"kubernetes.io/projected/c4e185a4-8ef7-421b-90d6-61149cf902d9-kube-api-access-7g5bf\") pod \"redhat-marketplace-6dmm8\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.319900 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-utilities\") pod \"redhat-marketplace-6dmm8\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.319942 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-catalog-content\") pod \"redhat-marketplace-6dmm8\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.320439 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-utilities\") pod \"redhat-marketplace-6dmm8\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.320476 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-catalog-content\") pod \"redhat-marketplace-6dmm8\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.337667 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7g5bf\" (UniqueName: \"kubernetes.io/projected/c4e185a4-8ef7-421b-90d6-61149cf902d9-kube-api-access-7g5bf\") pod \"redhat-marketplace-6dmm8\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.397143 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:26 crc kubenswrapper[4745]: I0121 11:44:26.987716 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dmm8"]
Jan 21 11:44:27 crc kubenswrapper[4745]: I0121 11:44:27.614108 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dmm8" event={"ID":"c4e185a4-8ef7-421b-90d6-61149cf902d9","Type":"ContainerStarted","Data":"bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3"}
Jan 21 11:44:27 crc kubenswrapper[4745]: I0121 11:44:27.615519 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dmm8" event={"ID":"c4e185a4-8ef7-421b-90d6-61149cf902d9","Type":"ContainerStarted","Data":"caf39deaaf4cc57e6ece6d59d72d8cdbeacb93b9260ab4f8414bedf78392d5b7"}
Jan 21 11:44:28 crc kubenswrapper[4745]: I0121 11:44:28.623827 4745 generic.go:334] "Generic (PLEG): container finished" podID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerID="bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3" exitCode=0
Jan 21 11:44:28 crc kubenswrapper[4745]: I0121 11:44:28.623922 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dmm8" event={"ID":"c4e185a4-8ef7-421b-90d6-61149cf902d9","Type":"ContainerDied","Data":"bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3"}
Jan 21 11:44:30 crc kubenswrapper[4745]: I0121 11:44:30.640186 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dmm8" event={"ID":"c4e185a4-8ef7-421b-90d6-61149cf902d9","Type":"ContainerStarted","Data":"81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85"}
Jan 21 11:44:31 crc kubenswrapper[4745]: I0121 11:44:31.651606 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dmm8" event={"ID":"c4e185a4-8ef7-421b-90d6-61149cf902d9","Type":"ContainerDied","Data":"81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85"}
Jan 21 11:44:31 crc kubenswrapper[4745]: I0121 11:44:31.651424 4745 generic.go:334] "Generic (PLEG): container finished" podID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerID="81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85" exitCode=0
Jan 21 11:44:33 crc kubenswrapper[4745]: I0121 11:44:33.677813 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dmm8" event={"ID":"c4e185a4-8ef7-421b-90d6-61149cf902d9","Type":"ContainerStarted","Data":"557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687"}
Jan 21 11:44:33 crc kubenswrapper[4745]: I0121 11:44:33.717106 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6dmm8" podStartSLOduration=3.60727144 podStartE2EDuration="7.716637579s" podCreationTimestamp="2026-01-21 11:44:26 +0000 UTC" firstStartedPulling="2026-01-21 11:44:28.628434568 +0000 UTC m=+4053.089222156" lastFinishedPulling="2026-01-21 11:44:32.737800697 +0000 UTC m=+4057.198588295" observedRunningTime="2026-01-21 11:44:33.707655294 +0000 UTC m=+4058.168442892" watchObservedRunningTime="2026-01-21 11:44:33.716637579 +0000 UTC m=+4058.177425177"
Jan 21 11:44:36 crc kubenswrapper[4745]: I0121 11:44:36.006803 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6"
Jan 21 11:44:36 crc kubenswrapper[4745]: E0121 11:44:36.007593 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:44:36 crc kubenswrapper[4745]: I0121 11:44:36.397339 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:36 crc kubenswrapper[4745]: I0121 11:44:36.397467 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:37 crc kubenswrapper[4745]: I0121 11:44:37.502783 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6dmm8" podUID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerName="registry-server" probeResult="failure" output=<
Jan 21 11:44:37 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s
Jan 21 11:44:37 crc kubenswrapper[4745]: >
Jan 21 11:44:46 crc kubenswrapper[4745]: I0121 11:44:46.451388 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:46 crc kubenswrapper[4745]: I0121 11:44:46.504980 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6dmm8"
Jan 21 11:44:46 crc kubenswrapper[4745]: I0121 11:44:46.700254 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dmm8"]
Jan 21 11:44:47 crc kubenswrapper[4745]: I0121 11:44:47.810894 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6dmm8" podUID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerName="registry-server" containerID="cri-o://557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687" gracePeriod=2
Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.000776 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6"
Jan 21 11:44:48 crc kubenswrapper[4745]: E0121 11:44:48.001329 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1"
Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.395401 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dmm8" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.405509 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7g5bf\" (UniqueName: \"kubernetes.io/projected/c4e185a4-8ef7-421b-90d6-61149cf902d9-kube-api-access-7g5bf\") pod \"c4e185a4-8ef7-421b-90d6-61149cf902d9\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.405605 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-catalog-content\") pod \"c4e185a4-8ef7-421b-90d6-61149cf902d9\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.405642 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-utilities\") pod \"c4e185a4-8ef7-421b-90d6-61149cf902d9\" (UID: \"c4e185a4-8ef7-421b-90d6-61149cf902d9\") " Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.407149 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-utilities" (OuterVolumeSpecName: "utilities") pod "c4e185a4-8ef7-421b-90d6-61149cf902d9" (UID: "c4e185a4-8ef7-421b-90d6-61149cf902d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.413673 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4e185a4-8ef7-421b-90d6-61149cf902d9-kube-api-access-7g5bf" (OuterVolumeSpecName: "kube-api-access-7g5bf") pod "c4e185a4-8ef7-421b-90d6-61149cf902d9" (UID: "c4e185a4-8ef7-421b-90d6-61149cf902d9"). InnerVolumeSpecName "kube-api-access-7g5bf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.444952 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4e185a4-8ef7-421b-90d6-61149cf902d9" (UID: "c4e185a4-8ef7-421b-90d6-61149cf902d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.507257 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7g5bf\" (UniqueName: \"kubernetes.io/projected/c4e185a4-8ef7-421b-90d6-61149cf902d9-kube-api-access-7g5bf\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.507316 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.507330 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4e185a4-8ef7-421b-90d6-61149cf902d9-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.821070 4745 generic.go:334] "Generic (PLEG): container finished" podID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerID="557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687" exitCode=0 Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.821119 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dmm8" event={"ID":"c4e185a4-8ef7-421b-90d6-61149cf902d9","Type":"ContainerDied","Data":"557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687"} Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.821145 4745 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dmm8" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.821167 4745 scope.go:117] "RemoveContainer" containerID="557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.821151 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dmm8" event={"ID":"c4e185a4-8ef7-421b-90d6-61149cf902d9","Type":"ContainerDied","Data":"caf39deaaf4cc57e6ece6d59d72d8cdbeacb93b9260ab4f8414bedf78392d5b7"} Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.853610 4745 scope.go:117] "RemoveContainer" containerID="81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.875523 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dmm8"] Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.882579 4745 scope.go:117] "RemoveContainer" containerID="bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.886084 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dmm8"] Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.934053 4745 scope.go:117] "RemoveContainer" containerID="557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687" Jan 21 11:44:48 crc kubenswrapper[4745]: E0121 11:44:48.934559 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687\": container with ID starting with 557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687 not found: ID does not exist" containerID="557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.934592 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687"} err="failed to get container status \"557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687\": rpc error: code = NotFound desc = could not find container \"557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687\": container with ID starting with 557cc1604e2a83e9b7fe3f3929c7c8755ec1326dbbc56668acf2121efe161687 not found: ID does not exist" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.934611 4745 scope.go:117] "RemoveContainer" containerID="81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85" Jan 21 11:44:48 crc kubenswrapper[4745]: E0121 11:44:48.935127 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85\": container with ID starting with 81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85 not found: ID does not exist" containerID="81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.935169 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85"} err="failed to get container status \"81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85\": rpc error: code = NotFound desc = could not find container \"81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85\": container with ID starting with 81882f37dd3a7bc18a55f26e0b2fcd2a2c12211c7862b898725fe95650d2dc85 not found: ID does not exist" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.935198 4745 scope.go:117] "RemoveContainer" containerID="bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3" Jan 21 11:44:48 crc kubenswrapper[4745]: E0121 
11:44:48.935612 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3\": container with ID starting with bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3 not found: ID does not exist" containerID="bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3" Jan 21 11:44:48 crc kubenswrapper[4745]: I0121 11:44:48.935639 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3"} err="failed to get container status \"bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3\": rpc error: code = NotFound desc = could not find container \"bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3\": container with ID starting with bb2f72bc0386c95323fd526a700d8893b5dc6415849e99d7413b85d9e2abaac3 not found: ID does not exist" Jan 21 11:44:50 crc kubenswrapper[4745]: I0121 11:44:50.013113 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4e185a4-8ef7-421b-90d6-61149cf902d9" path="/var/lib/kubelet/pods/c4e185a4-8ef7-421b-90d6-61149cf902d9/volumes" Jan 21 11:44:59 crc kubenswrapper[4745]: I0121 11:44:59.000499 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:44:59 crc kubenswrapper[4745]: E0121 11:44:59.001299 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.236678 
4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j"] Jan 21 11:45:00 crc kubenswrapper[4745]: E0121 11:45:00.237182 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerName="extract-content" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.237198 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerName="extract-content" Jan 21 11:45:00 crc kubenswrapper[4745]: E0121 11:45:00.237237 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerName="extract-utilities" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.237251 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerName="extract-utilities" Jan 21 11:45:00 crc kubenswrapper[4745]: E0121 11:45:00.237276 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.237284 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.237502 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4e185a4-8ef7-421b-90d6-61149cf902d9" containerName="registry-server" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.238189 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.243640 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.243639 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.249647 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j"] Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.280325 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsmrz\" (UniqueName: \"kubernetes.io/projected/8d229af8-05fa-419b-ac5f-7b6ff269389b-kube-api-access-lsmrz\") pod \"collect-profiles-29483265-f8m4j\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.280377 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d229af8-05fa-419b-ac5f-7b6ff269389b-secret-volume\") pod \"collect-profiles-29483265-f8m4j\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.280459 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d229af8-05fa-419b-ac5f-7b6ff269389b-config-volume\") pod \"collect-profiles-29483265-f8m4j\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.381420 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsmrz\" (UniqueName: \"kubernetes.io/projected/8d229af8-05fa-419b-ac5f-7b6ff269389b-kube-api-access-lsmrz\") pod \"collect-profiles-29483265-f8m4j\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.381465 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d229af8-05fa-419b-ac5f-7b6ff269389b-secret-volume\") pod \"collect-profiles-29483265-f8m4j\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.381523 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d229af8-05fa-419b-ac5f-7b6ff269389b-config-volume\") pod \"collect-profiles-29483265-f8m4j\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.382919 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d229af8-05fa-419b-ac5f-7b6ff269389b-config-volume\") pod \"collect-profiles-29483265-f8m4j\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.396509 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/8d229af8-05fa-419b-ac5f-7b6ff269389b-secret-volume\") pod \"collect-profiles-29483265-f8m4j\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.405732 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsmrz\" (UniqueName: \"kubernetes.io/projected/8d229af8-05fa-419b-ac5f-7b6ff269389b-kube-api-access-lsmrz\") pod \"collect-profiles-29483265-f8m4j\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:00 crc kubenswrapper[4745]: I0121 11:45:00.588737 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:01 crc kubenswrapper[4745]: I0121 11:45:01.177685 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j"] Jan 21 11:45:01 crc kubenswrapper[4745]: I0121 11:45:01.930836 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" event={"ID":"8d229af8-05fa-419b-ac5f-7b6ff269389b","Type":"ContainerStarted","Data":"a00abd929c48c6104657fa8e2c149baa5214443e1ff37067c163baaaefe5c819"} Jan 21 11:45:02 crc kubenswrapper[4745]: I0121 11:45:02.941984 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" event={"ID":"8d229af8-05fa-419b-ac5f-7b6ff269389b","Type":"ContainerStarted","Data":"5a8273d289ac9fe308464452031f92bd605a18e31d70abd1650de14659af30fc"} Jan 21 11:45:02 crc kubenswrapper[4745]: I0121 11:45:02.963392 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" 
podStartSLOduration=2.963372284 podStartE2EDuration="2.963372284s" podCreationTimestamp="2026-01-21 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:45:02.9558703 +0000 UTC m=+4087.416657908" watchObservedRunningTime="2026-01-21 11:45:02.963372284 +0000 UTC m=+4087.424159882" Jan 21 11:45:03 crc kubenswrapper[4745]: I0121 11:45:03.952399 4745 generic.go:334] "Generic (PLEG): container finished" podID="8d229af8-05fa-419b-ac5f-7b6ff269389b" containerID="5a8273d289ac9fe308464452031f92bd605a18e31d70abd1650de14659af30fc" exitCode=0 Jan 21 11:45:03 crc kubenswrapper[4745]: I0121 11:45:03.952810 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" event={"ID":"8d229af8-05fa-419b-ac5f-7b6ff269389b","Type":"ContainerDied","Data":"5a8273d289ac9fe308464452031f92bd605a18e31d70abd1650de14659af30fc"} Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.624943 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.687360 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d229af8-05fa-419b-ac5f-7b6ff269389b-config-volume\") pod \"8d229af8-05fa-419b-ac5f-7b6ff269389b\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.687466 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d229af8-05fa-419b-ac5f-7b6ff269389b-secret-volume\") pod \"8d229af8-05fa-419b-ac5f-7b6ff269389b\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.687507 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lsmrz\" (UniqueName: \"kubernetes.io/projected/8d229af8-05fa-419b-ac5f-7b6ff269389b-kube-api-access-lsmrz\") pod \"8d229af8-05fa-419b-ac5f-7b6ff269389b\" (UID: \"8d229af8-05fa-419b-ac5f-7b6ff269389b\") " Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.688108 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d229af8-05fa-419b-ac5f-7b6ff269389b-config-volume" (OuterVolumeSpecName: "config-volume") pod "8d229af8-05fa-419b-ac5f-7b6ff269389b" (UID: "8d229af8-05fa-419b-ac5f-7b6ff269389b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.693476 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d229af8-05fa-419b-ac5f-7b6ff269389b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8d229af8-05fa-419b-ac5f-7b6ff269389b" (UID: "8d229af8-05fa-419b-ac5f-7b6ff269389b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.705873 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d229af8-05fa-419b-ac5f-7b6ff269389b-kube-api-access-lsmrz" (OuterVolumeSpecName: "kube-api-access-lsmrz") pod "8d229af8-05fa-419b-ac5f-7b6ff269389b" (UID: "8d229af8-05fa-419b-ac5f-7b6ff269389b"). InnerVolumeSpecName "kube-api-access-lsmrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.789180 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8d229af8-05fa-419b-ac5f-7b6ff269389b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.789221 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lsmrz\" (UniqueName: \"kubernetes.io/projected/8d229af8-05fa-419b-ac5f-7b6ff269389b-kube-api-access-lsmrz\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.789233 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8d229af8-05fa-419b-ac5f-7b6ff269389b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.973096 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" event={"ID":"8d229af8-05fa-419b-ac5f-7b6ff269389b","Type":"ContainerDied","Data":"a00abd929c48c6104657fa8e2c149baa5214443e1ff37067c163baaaefe5c819"} Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.973161 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a00abd929c48c6104657fa8e2c149baa5214443e1ff37067c163baaaefe5c819" Jan 21 11:45:05 crc kubenswrapper[4745]: I0121 11:45:05.973223 4745 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j" Jan 21 11:45:06 crc kubenswrapper[4745]: I0121 11:45:06.075822 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q"] Jan 21 11:45:06 crc kubenswrapper[4745]: I0121 11:45:06.084378 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483220-wzj8q"] Jan 21 11:45:08 crc kubenswrapper[4745]: I0121 11:45:08.011182 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10308ebf-7e98-40cf-ae85-cdda215f5849" path="/var/lib/kubelet/pods/10308ebf-7e98-40cf-ae85-cdda215f5849/volumes" Jan 21 11:45:10 crc kubenswrapper[4745]: I0121 11:45:10.000227 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:45:10 crc kubenswrapper[4745]: E0121 11:45:10.000758 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:45:23 crc kubenswrapper[4745]: I0121 11:45:23.001246 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:45:23 crc kubenswrapper[4745]: E0121 11:45:23.002090 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.447290 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-29cdf"] Jan 21 11:45:34 crc kubenswrapper[4745]: E0121 11:45:34.448872 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d229af8-05fa-419b-ac5f-7b6ff269389b" containerName="collect-profiles" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.448891 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d229af8-05fa-419b-ac5f-7b6ff269389b" containerName="collect-profiles" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.449155 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d229af8-05fa-419b-ac5f-7b6ff269389b" containerName="collect-profiles" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.451426 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.460880 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-29cdf"] Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.575889 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzxj\" (UniqueName: \"kubernetes.io/projected/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-kube-api-access-fqzxj\") pod \"certified-operators-29cdf\" (UID: \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.576021 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-utilities\") pod \"certified-operators-29cdf\" (UID: 
\"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.576083 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-catalog-content\") pod \"certified-operators-29cdf\" (UID: \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.683341 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-utilities\") pod \"certified-operators-29cdf\" (UID: \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.683790 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-catalog-content\") pod \"certified-operators-29cdf\" (UID: \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.684006 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-utilities\") pod \"certified-operators-29cdf\" (UID: \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.684180 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqzxj\" (UniqueName: \"kubernetes.io/projected/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-kube-api-access-fqzxj\") pod \"certified-operators-29cdf\" (UID: 
\"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.684196 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-catalog-content\") pod \"certified-operators-29cdf\" (UID: \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.712843 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqzxj\" (UniqueName: \"kubernetes.io/projected/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-kube-api-access-fqzxj\") pod \"certified-operators-29cdf\" (UID: \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:34 crc kubenswrapper[4745]: I0121 11:45:34.778642 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:35 crc kubenswrapper[4745]: I0121 11:45:35.340098 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-29cdf"] Jan 21 11:45:36 crc kubenswrapper[4745]: I0121 11:45:36.011010 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:45:36 crc kubenswrapper[4745]: E0121 11:45:36.012519 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:45:36 crc kubenswrapper[4745]: I0121 11:45:36.226730 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" containerID="cf17f630bbc651b74639cc574f251f5ad32d7a5bf21b8a6bdb7773510298c98f" exitCode=0 Jan 21 11:45:36 crc kubenswrapper[4745]: I0121 11:45:36.226782 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29cdf" event={"ID":"a8086da6-13f4-45e8-964e-a56cc9fcd4ef","Type":"ContainerDied","Data":"cf17f630bbc651b74639cc574f251f5ad32d7a5bf21b8a6bdb7773510298c98f"} Jan 21 11:45:36 crc kubenswrapper[4745]: I0121 11:45:36.226812 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29cdf" event={"ID":"a8086da6-13f4-45e8-964e-a56cc9fcd4ef","Type":"ContainerStarted","Data":"a4d99b1b45bc173eabb1688cc80916a22743141e8b5956d7b9939efac9e7142e"} Jan 21 11:45:38 crc kubenswrapper[4745]: I0121 11:45:38.252271 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29cdf" 
event={"ID":"a8086da6-13f4-45e8-964e-a56cc9fcd4ef","Type":"ContainerStarted","Data":"de43ca78dfd68165632f507cb265612b06c309553ced61dc42826f20259adee6"} Jan 21 11:45:40 crc kubenswrapper[4745]: I0121 11:45:40.275917 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" containerID="de43ca78dfd68165632f507cb265612b06c309553ced61dc42826f20259adee6" exitCode=0 Jan 21 11:45:40 crc kubenswrapper[4745]: I0121 11:45:40.276094 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29cdf" event={"ID":"a8086da6-13f4-45e8-964e-a56cc9fcd4ef","Type":"ContainerDied","Data":"de43ca78dfd68165632f507cb265612b06c309553ced61dc42826f20259adee6"} Jan 21 11:45:41 crc kubenswrapper[4745]: I0121 11:45:41.289074 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29cdf" event={"ID":"a8086da6-13f4-45e8-964e-a56cc9fcd4ef","Type":"ContainerStarted","Data":"1121f7ea32f9c94b2f6ea8c69bbab68d87a5a0b5f83449b32d7413efd5031e9f"} Jan 21 11:45:41 crc kubenswrapper[4745]: I0121 11:45:41.326319 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-29cdf" podStartSLOduration=2.7717859689999997 podStartE2EDuration="7.326296743s" podCreationTimestamp="2026-01-21 11:45:34 +0000 UTC" firstStartedPulling="2026-01-21 11:45:36.229950092 +0000 UTC m=+4120.690737690" lastFinishedPulling="2026-01-21 11:45:40.784460866 +0000 UTC m=+4125.245248464" observedRunningTime="2026-01-21 11:45:41.318739987 +0000 UTC m=+4125.779527585" watchObservedRunningTime="2026-01-21 11:45:41.326296743 +0000 UTC m=+4125.787084351" Jan 21 11:45:44 crc kubenswrapper[4745]: I0121 11:45:44.778700 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:44 crc kubenswrapper[4745]: I0121 11:45:44.779085 4745 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:45 crc kubenswrapper[4745]: I0121 11:45:45.827299 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-29cdf" podUID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" containerName="registry-server" probeResult="failure" output=< Jan 21 11:45:45 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:45:45 crc kubenswrapper[4745]: > Jan 21 11:45:48 crc kubenswrapper[4745]: I0121 11:45:48.001099 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:45:48 crc kubenswrapper[4745]: E0121 11:45:48.001683 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:45:54 crc kubenswrapper[4745]: I0121 11:45:54.830775 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:54 crc kubenswrapper[4745]: I0121 11:45:54.887319 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:55 crc kubenswrapper[4745]: I0121 11:45:55.076002 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-29cdf"] Jan 21 11:45:56 crc kubenswrapper[4745]: I0121 11:45:56.438439 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-29cdf" podUID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" 
containerName="registry-server" containerID="cri-o://1121f7ea32f9c94b2f6ea8c69bbab68d87a5a0b5f83449b32d7413efd5031e9f" gracePeriod=2 Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.084385 4745 scope.go:117] "RemoveContainer" containerID="d2e300901e122d4bda836981957b1d222167b06cb3652e09c35113d6087f4a65" Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.448771 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" containerID="1121f7ea32f9c94b2f6ea8c69bbab68d87a5a0b5f83449b32d7413efd5031e9f" exitCode=0 Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.448818 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29cdf" event={"ID":"a8086da6-13f4-45e8-964e-a56cc9fcd4ef","Type":"ContainerDied","Data":"1121f7ea32f9c94b2f6ea8c69bbab68d87a5a0b5f83449b32d7413efd5031e9f"} Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.448844 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-29cdf" event={"ID":"a8086da6-13f4-45e8-964e-a56cc9fcd4ef","Type":"ContainerDied","Data":"a4d99b1b45bc173eabb1688cc80916a22743141e8b5956d7b9939efac9e7142e"} Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.448857 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4d99b1b45bc173eabb1688cc80916a22743141e8b5956d7b9939efac9e7142e" Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.558280 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.631114 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-utilities\") pod \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\" (UID: \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.631164 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-catalog-content\") pod \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\" (UID: \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.631192 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqzxj\" (UniqueName: \"kubernetes.io/projected/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-kube-api-access-fqzxj\") pod \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\" (UID: \"a8086da6-13f4-45e8-964e-a56cc9fcd4ef\") " Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.631920 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-utilities" (OuterVolumeSpecName: "utilities") pod "a8086da6-13f4-45e8-964e-a56cc9fcd4ef" (UID: "a8086da6-13f4-45e8-964e-a56cc9fcd4ef"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.632847 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.638443 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-kube-api-access-fqzxj" (OuterVolumeSpecName: "kube-api-access-fqzxj") pod "a8086da6-13f4-45e8-964e-a56cc9fcd4ef" (UID: "a8086da6-13f4-45e8-964e-a56cc9fcd4ef"). InnerVolumeSpecName "kube-api-access-fqzxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.731885 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a8086da6-13f4-45e8-964e-a56cc9fcd4ef" (UID: "a8086da6-13f4-45e8-964e-a56cc9fcd4ef"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.734611 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:57 crc kubenswrapper[4745]: I0121 11:45:57.734682 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqzxj\" (UniqueName: \"kubernetes.io/projected/a8086da6-13f4-45e8-964e-a56cc9fcd4ef-kube-api-access-fqzxj\") on node \"crc\" DevicePath \"\"" Jan 21 11:45:58 crc kubenswrapper[4745]: I0121 11:45:58.455873 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-29cdf" Jan 21 11:45:58 crc kubenswrapper[4745]: I0121 11:45:58.480368 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-29cdf"] Jan 21 11:45:58 crc kubenswrapper[4745]: I0121 11:45:58.489402 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-29cdf"] Jan 21 11:46:00 crc kubenswrapper[4745]: I0121 11:46:00.000299 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:46:00 crc kubenswrapper[4745]: E0121 11:46:00.000840 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:46:00 crc kubenswrapper[4745]: I0121 11:46:00.012386 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" path="/var/lib/kubelet/pods/a8086da6-13f4-45e8-964e-a56cc9fcd4ef/volumes" Jan 21 11:46:14 crc kubenswrapper[4745]: I0121 11:46:14.001587 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:46:14 crc kubenswrapper[4745]: E0121 11:46:14.003805 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" 
podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:46:29 crc kubenswrapper[4745]: I0121 11:46:29.000666 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:46:29 crc kubenswrapper[4745]: I0121 11:46:29.751727 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"13e50ee1240970d1c66d00ecf138395936664d814d86f9376fca4af53de8a461"} Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.726281 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2svq2"] Jan 21 11:47:25 crc kubenswrapper[4745]: E0121 11:47:25.727376 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" containerName="extract-content" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.727396 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" containerName="extract-content" Jan 21 11:47:25 crc kubenswrapper[4745]: E0121 11:47:25.727414 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" containerName="extract-utilities" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.727422 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" containerName="extract-utilities" Jan 21 11:47:25 crc kubenswrapper[4745]: E0121 11:47:25.727447 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" containerName="registry-server" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.727456 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" containerName="registry-server" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.727712 4745 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a8086da6-13f4-45e8-964e-a56cc9fcd4ef" containerName="registry-server" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.730038 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.739244 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2svq2"] Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.821516 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-utilities\") pod \"community-operators-2svq2\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.821630 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-catalog-content\") pod \"community-operators-2svq2\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.821824 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdh2r\" (UniqueName: \"kubernetes.io/projected/efe3c383-326d-4f84-8c83-fd2e191aaa7c-kube-api-access-rdh2r\") pod \"community-operators-2svq2\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.923923 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-utilities\") pod 
\"community-operators-2svq2\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.924014 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-catalog-content\") pod \"community-operators-2svq2\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.924166 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdh2r\" (UniqueName: \"kubernetes.io/projected/efe3c383-326d-4f84-8c83-fd2e191aaa7c-kube-api-access-rdh2r\") pod \"community-operators-2svq2\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.924715 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-catalog-content\") pod \"community-operators-2svq2\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.924931 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-utilities\") pod \"community-operators-2svq2\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:25 crc kubenswrapper[4745]: I0121 11:47:25.947627 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdh2r\" (UniqueName: \"kubernetes.io/projected/efe3c383-326d-4f84-8c83-fd2e191aaa7c-kube-api-access-rdh2r\") pod 
\"community-operators-2svq2\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:26 crc kubenswrapper[4745]: I0121 11:47:26.051816 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:26 crc kubenswrapper[4745]: I0121 11:47:26.796916 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2svq2"] Jan 21 11:47:27 crc kubenswrapper[4745]: I0121 11:47:27.255014 4745 generic.go:334] "Generic (PLEG): container finished" podID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" containerID="a668e6486d30ce351a3bd75c2fe4bd0577902e2b50b55f7405727e363a749065" exitCode=0 Jan 21 11:47:27 crc kubenswrapper[4745]: I0121 11:47:27.255075 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2svq2" event={"ID":"efe3c383-326d-4f84-8c83-fd2e191aaa7c","Type":"ContainerDied","Data":"a668e6486d30ce351a3bd75c2fe4bd0577902e2b50b55f7405727e363a749065"} Jan 21 11:47:27 crc kubenswrapper[4745]: I0121 11:47:27.255611 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2svq2" event={"ID":"efe3c383-326d-4f84-8c83-fd2e191aaa7c","Type":"ContainerStarted","Data":"7ff78099f001ab28faaf93f960cce05d2a54deb91cdd38c36c46d9759164f973"} Jan 21 11:47:28 crc kubenswrapper[4745]: I0121 11:47:28.265903 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2svq2" event={"ID":"efe3c383-326d-4f84-8c83-fd2e191aaa7c","Type":"ContainerStarted","Data":"313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7"} Jan 21 11:47:29 crc kubenswrapper[4745]: I0121 11:47:29.277715 4745 generic.go:334] "Generic (PLEG): container finished" podID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" containerID="313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7" exitCode=0 Jan 21 11:47:29 
crc kubenswrapper[4745]: I0121 11:47:29.277985 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2svq2" event={"ID":"efe3c383-326d-4f84-8c83-fd2e191aaa7c","Type":"ContainerDied","Data":"313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7"} Jan 21 11:47:30 crc kubenswrapper[4745]: I0121 11:47:30.289329 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2svq2" event={"ID":"efe3c383-326d-4f84-8c83-fd2e191aaa7c","Type":"ContainerStarted","Data":"02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4"} Jan 21 11:47:30 crc kubenswrapper[4745]: I0121 11:47:30.308961 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2svq2" podStartSLOduration=2.885989137 podStartE2EDuration="5.308915429s" podCreationTimestamp="2026-01-21 11:47:25 +0000 UTC" firstStartedPulling="2026-01-21 11:47:27.258646443 +0000 UTC m=+4231.719434041" lastFinishedPulling="2026-01-21 11:47:29.681572735 +0000 UTC m=+4234.142360333" observedRunningTime="2026-01-21 11:47:30.305119496 +0000 UTC m=+4234.765907114" watchObservedRunningTime="2026-01-21 11:47:30.308915429 +0000 UTC m=+4234.769703047" Jan 21 11:47:36 crc kubenswrapper[4745]: I0121 11:47:36.052811 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:36 crc kubenswrapper[4745]: I0121 11:47:36.053782 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:36 crc kubenswrapper[4745]: I0121 11:47:36.107871 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:36 crc kubenswrapper[4745]: I0121 11:47:36.865364 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:36 crc kubenswrapper[4745]: I0121 11:47:36.917714 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2svq2"] Jan 21 11:47:38 crc kubenswrapper[4745]: I0121 11:47:38.786247 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2svq2" podUID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" containerName="registry-server" containerID="cri-o://02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4" gracePeriod=2 Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.239842 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.402461 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdh2r\" (UniqueName: \"kubernetes.io/projected/efe3c383-326d-4f84-8c83-fd2e191aaa7c-kube-api-access-rdh2r\") pod \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.402725 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-catalog-content\") pod \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.402759 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-utilities\") pod \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\" (UID: \"efe3c383-326d-4f84-8c83-fd2e191aaa7c\") " Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.403667 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-utilities" (OuterVolumeSpecName: "utilities") pod "efe3c383-326d-4f84-8c83-fd2e191aaa7c" (UID: "efe3c383-326d-4f84-8c83-fd2e191aaa7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.415692 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efe3c383-326d-4f84-8c83-fd2e191aaa7c-kube-api-access-rdh2r" (OuterVolumeSpecName: "kube-api-access-rdh2r") pod "efe3c383-326d-4f84-8c83-fd2e191aaa7c" (UID: "efe3c383-326d-4f84-8c83-fd2e191aaa7c"). InnerVolumeSpecName "kube-api-access-rdh2r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.458727 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "efe3c383-326d-4f84-8c83-fd2e191aaa7c" (UID: "efe3c383-326d-4f84-8c83-fd2e191aaa7c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.504715 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.504745 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/efe3c383-326d-4f84-8c83-fd2e191aaa7c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.504754 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rdh2r\" (UniqueName: \"kubernetes.io/projected/efe3c383-326d-4f84-8c83-fd2e191aaa7c-kube-api-access-rdh2r\") on node \"crc\" DevicePath \"\"" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.798508 4745 generic.go:334] "Generic (PLEG): container finished" podID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" containerID="02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4" exitCode=0 Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.798570 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2svq2" event={"ID":"efe3c383-326d-4f84-8c83-fd2e191aaa7c","Type":"ContainerDied","Data":"02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4"} Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.798967 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2svq2" event={"ID":"efe3c383-326d-4f84-8c83-fd2e191aaa7c","Type":"ContainerDied","Data":"7ff78099f001ab28faaf93f960cce05d2a54deb91cdd38c36c46d9759164f973"} Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.798597 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2svq2" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.798999 4745 scope.go:117] "RemoveContainer" containerID="02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.824160 4745 scope.go:117] "RemoveContainer" containerID="313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.840817 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2svq2"] Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.852992 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2svq2"] Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.874291 4745 scope.go:117] "RemoveContainer" containerID="a668e6486d30ce351a3bd75c2fe4bd0577902e2b50b55f7405727e363a749065" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.930707 4745 scope.go:117] "RemoveContainer" containerID="02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4" Jan 21 11:47:39 crc kubenswrapper[4745]: E0121 11:47:39.932009 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4\": container with ID starting with 02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4 not found: ID does not exist" containerID="02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.932064 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4"} err="failed to get container status \"02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4\": rpc error: code = NotFound desc = could not find 
container \"02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4\": container with ID starting with 02c1860112d4e1440f1921cb78b330294194a2918fa0561f89065705eb9e6cd4 not found: ID does not exist" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.932090 4745 scope.go:117] "RemoveContainer" containerID="313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7" Jan 21 11:47:39 crc kubenswrapper[4745]: E0121 11:47:39.932594 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7\": container with ID starting with 313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7 not found: ID does not exist" containerID="313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.932648 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7"} err="failed to get container status \"313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7\": rpc error: code = NotFound desc = could not find container \"313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7\": container with ID starting with 313ff4896a9010e6830c8a95781a4f1f90a6c2dae8f7104de76a6278a5351cf7 not found: ID does not exist" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.932676 4745 scope.go:117] "RemoveContainer" containerID="a668e6486d30ce351a3bd75c2fe4bd0577902e2b50b55f7405727e363a749065" Jan 21 11:47:39 crc kubenswrapper[4745]: E0121 11:47:39.933135 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a668e6486d30ce351a3bd75c2fe4bd0577902e2b50b55f7405727e363a749065\": container with ID starting with a668e6486d30ce351a3bd75c2fe4bd0577902e2b50b55f7405727e363a749065 not found: ID does 
not exist" containerID="a668e6486d30ce351a3bd75c2fe4bd0577902e2b50b55f7405727e363a749065" Jan 21 11:47:39 crc kubenswrapper[4745]: I0121 11:47:39.933182 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a668e6486d30ce351a3bd75c2fe4bd0577902e2b50b55f7405727e363a749065"} err="failed to get container status \"a668e6486d30ce351a3bd75c2fe4bd0577902e2b50b55f7405727e363a749065\": rpc error: code = NotFound desc = could not find container \"a668e6486d30ce351a3bd75c2fe4bd0577902e2b50b55f7405727e363a749065\": container with ID starting with a668e6486d30ce351a3bd75c2fe4bd0577902e2b50b55f7405727e363a749065 not found: ID does not exist" Jan 21 11:47:40 crc kubenswrapper[4745]: I0121 11:47:40.018871 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" path="/var/lib/kubelet/pods/efe3c383-326d-4f84-8c83-fd2e191aaa7c/volumes" Jan 21 11:48:38 crc kubenswrapper[4745]: I0121 11:48:38.368871 4745 generic.go:334] "Generic (PLEG): container finished" podID="7dc068ac-4289-4996-8263-d1db282282cd" containerID="e59c725bc540d2a468b644ded7efc006be676656febd5573d80a727dcd06cb5f" exitCode=1 Jan 21 11:48:38 crc kubenswrapper[4745]: I0121 11:48:38.368960 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"7dc068ac-4289-4996-8263-d1db282282cd","Type":"ContainerDied","Data":"e59c725bc540d2a468b644ded7efc006be676656febd5573d80a727dcd06cb5f"} Jan 21 11:48:39 crc kubenswrapper[4745]: I0121 11:48:39.940785 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.031962 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-temporary\") pod \"7dc068ac-4289-4996-8263-d1db282282cd\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.032016 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-workdir\") pod \"7dc068ac-4289-4996-8263-d1db282282cd\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.032101 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ssh-key\") pod \"7dc068ac-4289-4996-8263-d1db282282cd\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.032150 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ca-certs\") pod \"7dc068ac-4289-4996-8263-d1db282282cd\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.032196 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n46jc\" (UniqueName: \"kubernetes.io/projected/7dc068ac-4289-4996-8263-d1db282282cd-kube-api-access-n46jc\") pod \"7dc068ac-4289-4996-8263-d1db282282cd\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.032461 4745 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-config-data\") pod \"7dc068ac-4289-4996-8263-d1db282282cd\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.032559 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config-secret\") pod \"7dc068ac-4289-4996-8263-d1db282282cd\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.032587 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config\") pod \"7dc068ac-4289-4996-8263-d1db282282cd\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.032612 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"7dc068ac-4289-4996-8263-d1db282282cd\" (UID: \"7dc068ac-4289-4996-8263-d1db282282cd\") " Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.034855 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "7dc068ac-4289-4996-8263-d1db282282cd" (UID: "7dc068ac-4289-4996-8263-d1db282282cd"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.035569 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-config-data" (OuterVolumeSpecName: "config-data") pod "7dc068ac-4289-4996-8263-d1db282282cd" (UID: "7dc068ac-4289-4996-8263-d1db282282cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.041623 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "7dc068ac-4289-4996-8263-d1db282282cd" (UID: "7dc068ac-4289-4996-8263-d1db282282cd"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.047114 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dc068ac-4289-4996-8263-d1db282282cd-kube-api-access-n46jc" (OuterVolumeSpecName: "kube-api-access-n46jc") pod "7dc068ac-4289-4996-8263-d1db282282cd" (UID: "7dc068ac-4289-4996-8263-d1db282282cd"). InnerVolumeSpecName "kube-api-access-n46jc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.066122 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 21 11:48:40 crc kubenswrapper[4745]: E0121 11:48:40.066493 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" containerName="extract-content" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.066509 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" containerName="extract-content" Jan 21 11:48:40 crc kubenswrapper[4745]: E0121 11:48:40.066522 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dc068ac-4289-4996-8263-d1db282282cd" containerName="tempest-tests-tempest-tests-runner" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.066613 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dc068ac-4289-4996-8263-d1db282282cd" containerName="tempest-tests-tempest-tests-runner" Jan 21 11:48:40 crc kubenswrapper[4745]: E0121 11:48:40.066646 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" containerName="extract-utilities" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.066653 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" containerName="extract-utilities" Jan 21 11:48:40 crc kubenswrapper[4745]: E0121 11:48:40.066664 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" containerName="registry-server" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.066670 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" containerName="registry-server" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.066831 4745 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7dc068ac-4289-4996-8263-d1db282282cd" containerName="tempest-tests-tempest-tests-runner" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.066853 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="efe3c383-326d-4f84-8c83-fd2e191aaa7c" containerName="registry-server" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.067482 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.073478 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.074968 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.080722 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "7dc068ac-4289-4996-8263-d1db282282cd" (UID: "7dc068ac-4289-4996-8263-d1db282282cd"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.101852 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.107093 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "7dc068ac-4289-4996-8263-d1db282282cd" (UID: "7dc068ac-4289-4996-8263-d1db282282cd"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.109859 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7dc068ac-4289-4996-8263-d1db282282cd" (UID: "7dc068ac-4289-4996-8263-d1db282282cd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.114023 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "7dc068ac-4289-4996-8263-d1db282282cd" (UID: "7dc068ac-4289-4996-8263-d1db282282cd"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.136736 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.136983 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.137086 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.137212 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.137325 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.137473 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.137615 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " 
pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.138664 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.138768 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggfw5\" (UniqueName: \"kubernetes.io/projected/58f0330f-8bbd-440b-8396-79f1976798af-kube-api-access-ggfw5\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.138928 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.138968 4745 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.138982 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n46jc\" (UniqueName: \"kubernetes.io/projected/7dc068ac-4289-4996-8263-d1db282282cd-kube-api-access-n46jc\") on node \"crc\" DevicePath \"\"" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.138996 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.139008 4745 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.139048 4745 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.139060 4745 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/7dc068ac-4289-4996-8263-d1db282282cd-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.150732 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "7dc068ac-4289-4996-8263-d1db282282cd" (UID: "7dc068ac-4289-4996-8263-d1db282282cd"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.169016 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.240798 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.240869 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggfw5\" (UniqueName: \"kubernetes.io/projected/58f0330f-8bbd-440b-8396-79f1976798af-kube-api-access-ggfw5\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.241162 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.241202 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.241233 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.241269 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.241293 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.241314 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc 
kubenswrapper[4745]: I0121 11:48:40.241566 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.241701 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.241904 4745 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/7dc068ac-4289-4996-8263-d1db282282cd-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.243427 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.243522 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" 
Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.246514 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.247337 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.247406 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.260063 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggfw5\" (UniqueName: \"kubernetes.io/projected/58f0330f-8bbd-440b-8396-79f1976798af-kube-api-access-ggfw5\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.385829 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" 
event={"ID":"7dc068ac-4289-4996-8263-d1db282282cd","Type":"ContainerDied","Data":"1b08c01e75f8d555bd7da88d2b5ea781b439b5b6bf62460f299418cb1e4c0840"} Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.385869 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b08c01e75f8d555bd7da88d2b5ea781b439b5b6bf62460f299418cb1e4c0840" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.385962 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 21 11:48:40 crc kubenswrapper[4745]: I0121 11:48:40.517173 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 11:48:41 crc kubenswrapper[4745]: I0121 11:48:41.064123 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 21 11:48:41 crc kubenswrapper[4745]: I0121 11:48:41.395244 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"58f0330f-8bbd-440b-8396-79f1976798af","Type":"ContainerStarted","Data":"7b48bd1471dfb040ee9cc8077e3c6fa9d4fcce58350b52499654ff709ba69019"} Jan 21 11:48:43 crc kubenswrapper[4745]: I0121 11:48:43.412367 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"58f0330f-8bbd-440b-8396-79f1976798af","Type":"ContainerStarted","Data":"cc316e9d31bb8db0baccb422f1fffe7c20ec226cb3ae73ea30fe41eae7a07b76"} Jan 21 11:48:43 crc kubenswrapper[4745]: I0121 11:48:43.433460 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=3.4334427290000002 podStartE2EDuration="3.433442729s" podCreationTimestamp="2026-01-21 11:48:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:48:43.428225357 +0000 UTC m=+4307.889012975" watchObservedRunningTime="2026-01-21 11:48:43.433442729 +0000 UTC m=+4307.894230347" Jan 21 11:48:45 crc kubenswrapper[4745]: I0121 11:48:45.866308 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:48:45 crc kubenswrapper[4745]: I0121 11:48:45.867234 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:49:15 crc kubenswrapper[4745]: I0121 11:49:15.867402 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:49:15 crc kubenswrapper[4745]: I0121 11:49:15.867925 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.613798 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-dd7dc574f-plxsl"] Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.617321 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.628443 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dd7dc574f-plxsl"] Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.789236 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-ovndb-tls-certs\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.789558 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-combined-ca-bundle\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.789585 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-internal-tls-certs\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.789610 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-config\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.789640 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-httpd-config\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.789664 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8g49\" (UniqueName: \"kubernetes.io/projected/c45d76bb-2a71-404e-b251-f62126f44bc7-kube-api-access-c8g49\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.790271 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-public-tls-certs\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.891914 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-ovndb-tls-certs\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.892002 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-combined-ca-bundle\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.892031 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-internal-tls-certs\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.892069 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-config\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.892107 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-httpd-config\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.892139 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8g49\" (UniqueName: \"kubernetes.io/projected/c45d76bb-2a71-404e-b251-f62126f44bc7-kube-api-access-c8g49\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.892180 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-public-tls-certs\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.899354 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-httpd-config\") pod \"neutron-dd7dc574f-plxsl\" (UID: 
\"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.902897 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-public-tls-certs\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.903331 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-combined-ca-bundle\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.907132 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-internal-tls-certs\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.909292 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-ovndb-tls-certs\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.913964 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8g49\" (UniqueName: \"kubernetes.io/projected/c45d76bb-2a71-404e-b251-f62126f44bc7-kube-api-access-c8g49\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc 
kubenswrapper[4745]: I0121 11:49:35.919337 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-config\") pod \"neutron-dd7dc574f-plxsl\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:35 crc kubenswrapper[4745]: I0121 11:49:35.933808 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:36 crc kubenswrapper[4745]: I0121 11:49:36.516513 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-dd7dc574f-plxsl"] Jan 21 11:49:36 crc kubenswrapper[4745]: I0121 11:49:36.942637 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd7dc574f-plxsl" event={"ID":"c45d76bb-2a71-404e-b251-f62126f44bc7","Type":"ContainerStarted","Data":"1e266ca71a6c96a7ab86acf0170d9dd168eef0dc55b00c273f16d32d051129c3"} Jan 21 11:49:37 crc kubenswrapper[4745]: I0121 11:49:37.953789 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd7dc574f-plxsl" event={"ID":"c45d76bb-2a71-404e-b251-f62126f44bc7","Type":"ContainerStarted","Data":"881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c"} Jan 21 11:49:37 crc kubenswrapper[4745]: I0121 11:49:37.954350 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:49:37 crc kubenswrapper[4745]: I0121 11:49:37.954364 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd7dc574f-plxsl" event={"ID":"c45d76bb-2a71-404e-b251-f62126f44bc7","Type":"ContainerStarted","Data":"8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc"} Jan 21 11:49:37 crc kubenswrapper[4745]: I0121 11:49:37.984394 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-dd7dc574f-plxsl" podStartSLOduration=2.984368678 
podStartE2EDuration="2.984368678s" podCreationTimestamp="2026-01-21 11:49:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 11:49:37.973316167 +0000 UTC m=+4362.434103765" watchObservedRunningTime="2026-01-21 11:49:37.984368678 +0000 UTC m=+4362.445156286" Jan 21 11:49:45 crc kubenswrapper[4745]: I0121 11:49:45.866810 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:49:45 crc kubenswrapper[4745]: I0121 11:49:45.867968 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:49:45 crc kubenswrapper[4745]: I0121 11:49:45.868510 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 11:49:45 crc kubenswrapper[4745]: I0121 11:49:45.870032 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"13e50ee1240970d1c66d00ecf138395936664d814d86f9376fca4af53de8a461"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:49:45 crc kubenswrapper[4745]: I0121 11:49:45.870145 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" 
containerName="machine-config-daemon" containerID="cri-o://13e50ee1240970d1c66d00ecf138395936664d814d86f9376fca4af53de8a461" gracePeriod=600 Jan 21 11:49:47 crc kubenswrapper[4745]: I0121 11:49:47.032848 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="13e50ee1240970d1c66d00ecf138395936664d814d86f9376fca4af53de8a461" exitCode=0 Jan 21 11:49:47 crc kubenswrapper[4745]: I0121 11:49:47.033359 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"13e50ee1240970d1c66d00ecf138395936664d814d86f9376fca4af53de8a461"} Jan 21 11:49:47 crc kubenswrapper[4745]: I0121 11:49:47.033385 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a"} Jan 21 11:49:47 crc kubenswrapper[4745]: I0121 11:49:47.033402 4745 scope.go:117] "RemoveContainer" containerID="a90d44c671e69335238cf732da141d173293d9002c07d9d8dce4b94c76f4dbf6" Jan 21 11:50:05 crc kubenswrapper[4745]: I0121 11:50:05.947668 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 11:50:06 crc kubenswrapper[4745]: I0121 11:50:06.030496 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68d7f877d9-dj8vd"] Jan 21 11:50:06 crc kubenswrapper[4745]: I0121 11:50:06.030817 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68d7f877d9-dj8vd" podUID="45d1693d-5ab9-46b2-a4dd-de325b074f0f" containerName="neutron-api" containerID="cri-o://90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1" gracePeriod=30 Jan 21 11:50:06 crc kubenswrapper[4745]: I0121 
11:50:06.031106 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-68d7f877d9-dj8vd" podUID="45d1693d-5ab9-46b2-a4dd-de325b074f0f" containerName="neutron-httpd" containerID="cri-o://bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2" gracePeriod=30 Jan 21 11:50:07 crc kubenswrapper[4745]: I0121 11:50:07.248804 4745 generic.go:334] "Generic (PLEG): container finished" podID="45d1693d-5ab9-46b2-a4dd-de325b074f0f" containerID="bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2" exitCode=0 Jan 21 11:50:07 crc kubenswrapper[4745]: I0121 11:50:07.248961 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d7f877d9-dj8vd" event={"ID":"45d1693d-5ab9-46b2-a4dd-de325b074f0f","Type":"ContainerDied","Data":"bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2"} Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.165210 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.308547 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-public-tls-certs\") pod \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.308639 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-combined-ca-bundle\") pod \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.308749 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht785\" (UniqueName: 
\"kubernetes.io/projected/45d1693d-5ab9-46b2-a4dd-de325b074f0f-kube-api-access-ht785\") pod \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.308810 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-config\") pod \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.308927 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-httpd-config\") pod \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.309009 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-internal-tls-certs\") pod \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.309042 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-ovndb-tls-certs\") pod \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\" (UID: \"45d1693d-5ab9-46b2-a4dd-de325b074f0f\") " Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.332791 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "45d1693d-5ab9-46b2-a4dd-de325b074f0f" (UID: "45d1693d-5ab9-46b2-a4dd-de325b074f0f"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.333070 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d1693d-5ab9-46b2-a4dd-de325b074f0f-kube-api-access-ht785" (OuterVolumeSpecName: "kube-api-access-ht785") pod "45d1693d-5ab9-46b2-a4dd-de325b074f0f" (UID: "45d1693d-5ab9-46b2-a4dd-de325b074f0f"). InnerVolumeSpecName "kube-api-access-ht785". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.365328 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45d1693d-5ab9-46b2-a4dd-de325b074f0f" (UID: "45d1693d-5ab9-46b2-a4dd-de325b074f0f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.376333 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "45d1693d-5ab9-46b2-a4dd-de325b074f0f" (UID: "45d1693d-5ab9-46b2-a4dd-de325b074f0f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.382399 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "45d1693d-5ab9-46b2-a4dd-de325b074f0f" (UID: "45d1693d-5ab9-46b2-a4dd-de325b074f0f"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.383520 4745 generic.go:334] "Generic (PLEG): container finished" podID="45d1693d-5ab9-46b2-a4dd-de325b074f0f" containerID="90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1" exitCode=0 Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.383667 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d7f877d9-dj8vd" event={"ID":"45d1693d-5ab9-46b2-a4dd-de325b074f0f","Type":"ContainerDied","Data":"90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1"} Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.383702 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68d7f877d9-dj8vd" event={"ID":"45d1693d-5ab9-46b2-a4dd-de325b074f0f","Type":"ContainerDied","Data":"cc0ed54d479a28cf536d4e719bbe5f526d910506f9cc6688c1387fec410d3a17"} Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.383719 4745 scope.go:117] "RemoveContainer" containerID="bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.383854 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68d7f877d9-dj8vd" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.387831 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-config" (OuterVolumeSpecName: "config") pod "45d1693d-5ab9-46b2-a4dd-de325b074f0f" (UID: "45d1693d-5ab9-46b2-a4dd-de325b074f0f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.412595 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht785\" (UniqueName: \"kubernetes.io/projected/45d1693d-5ab9-46b2-a4dd-de325b074f0f-kube-api-access-ht785\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.412639 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.412653 4745 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.413171 4745 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.413188 4745 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.413199 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.416372 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "45d1693d-5ab9-46b2-a4dd-de325b074f0f" (UID: 
"45d1693d-5ab9-46b2-a4dd-de325b074f0f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.477353 4745 scope.go:117] "RemoveContainer" containerID="90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.510800 4745 scope.go:117] "RemoveContainer" containerID="bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2" Jan 21 11:50:17 crc kubenswrapper[4745]: E0121 11:50:17.511430 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2\": container with ID starting with bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2 not found: ID does not exist" containerID="bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.511482 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2"} err="failed to get container status \"bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2\": rpc error: code = NotFound desc = could not find container \"bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2\": container with ID starting with bbc04c2ac1fa5f115911c71d3df200488582a9c2d4f476b4d907d5f761198db2 not found: ID does not exist" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.511514 4745 scope.go:117] "RemoveContainer" containerID="90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1" Jan 21 11:50:17 crc kubenswrapper[4745]: E0121 11:50:17.512018 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1\": container 
with ID starting with 90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1 not found: ID does not exist" containerID="90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.512070 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1"} err="failed to get container status \"90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1\": rpc error: code = NotFound desc = could not find container \"90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1\": container with ID starting with 90bc47c3be294d9f83faec45fc3235424ccde1940758df93bed5896f1a197cd1 not found: ID does not exist" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.514690 4745 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/45d1693d-5ab9-46b2-a4dd-de325b074f0f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.726564 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-68d7f877d9-dj8vd"] Jan 21 11:50:17 crc kubenswrapper[4745]: I0121 11:50:17.734471 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-68d7f877d9-dj8vd"] Jan 21 11:50:18 crc kubenswrapper[4745]: I0121 11:50:18.011479 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d1693d-5ab9-46b2-a4dd-de325b074f0f" path="/var/lib/kubelet/pods/45d1693d-5ab9-46b2-a4dd-de325b074f0f/volumes" Jan 21 11:51:57 crc kubenswrapper[4745]: I0121 11:51:57.708483 4745 scope.go:117] "RemoveContainer" containerID="cf17f630bbc651b74639cc574f251f5ad32d7a5bf21b8a6bdb7773510298c98f" Jan 21 11:51:58 crc kubenswrapper[4745]: I0121 11:51:57.744378 4745 scope.go:117] "RemoveContainer" containerID="de43ca78dfd68165632f507cb265612b06c309553ced61dc42826f20259adee6" Jan 21 11:51:58 
crc kubenswrapper[4745]: I0121 11:51:57.780920 4745 scope.go:117] "RemoveContainer" containerID="1121f7ea32f9c94b2f6ea8c69bbab68d87a5a0b5f83449b32d7413efd5031e9f" Jan 21 11:52:15 crc kubenswrapper[4745]: I0121 11:52:15.866719 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:52:15 crc kubenswrapper[4745]: I0121 11:52:15.867142 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:52:45 crc kubenswrapper[4745]: I0121 11:52:45.866512 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 11:52:45 crc kubenswrapper[4745]: I0121 11:52:45.867165 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:53:15 crc kubenswrapper[4745]: I0121 11:53:15.866958 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 21 11:53:15 crc kubenswrapper[4745]: I0121 11:53:15.867631 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 11:53:15 crc kubenswrapper[4745]: I0121 11:53:15.867682 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 11:53:15 crc kubenswrapper[4745]: I0121 11:53:15.870862 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 11:53:15 crc kubenswrapper[4745]: I0121 11:53:15.871272 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" gracePeriod=600 Jan 21 11:53:16 crc kubenswrapper[4745]: I0121 11:53:16.091487 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" exitCode=0 Jan 21 11:53:16 crc kubenswrapper[4745]: I0121 11:53:16.091583 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" 
event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a"} Jan 21 11:53:16 crc kubenswrapper[4745]: I0121 11:53:16.091682 4745 scope.go:117] "RemoveContainer" containerID="13e50ee1240970d1c66d00ecf138395936664d814d86f9376fca4af53de8a461" Jan 21 11:53:16 crc kubenswrapper[4745]: E0121 11:53:16.366805 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:53:17 crc kubenswrapper[4745]: I0121 11:53:17.101997 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:53:17 crc kubenswrapper[4745]: E0121 11:53:17.102364 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:53:31 crc kubenswrapper[4745]: I0121 11:53:31.000648 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:53:31 crc kubenswrapper[4745]: E0121 11:53:31.001434 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:53:44 crc kubenswrapper[4745]: I0121 11:53:43.999802 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:53:44 crc kubenswrapper[4745]: E0121 11:53:44.000343 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:53:56 crc kubenswrapper[4745]: I0121 11:53:56.010308 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:53:56 crc kubenswrapper[4745]: E0121 11:53:56.011105 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:54:09 crc kubenswrapper[4745]: I0121 11:54:09.000155 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:54:09 crc kubenswrapper[4745]: E0121 11:54:09.002221 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:54:20 crc kubenswrapper[4745]: I0121 11:54:20.001114 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:54:20 crc kubenswrapper[4745]: E0121 11:54:20.002119 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.211266 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zh5g8"] Jan 21 11:54:26 crc kubenswrapper[4745]: E0121 11:54:26.212429 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d1693d-5ab9-46b2-a4dd-de325b074f0f" containerName="neutron-httpd" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.212456 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d1693d-5ab9-46b2-a4dd-de325b074f0f" containerName="neutron-httpd" Jan 21 11:54:26 crc kubenswrapper[4745]: E0121 11:54:26.212490 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d1693d-5ab9-46b2-a4dd-de325b074f0f" containerName="neutron-api" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.212496 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d1693d-5ab9-46b2-a4dd-de325b074f0f" containerName="neutron-api" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.212802 4745 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="45d1693d-5ab9-46b2-a4dd-de325b074f0f" containerName="neutron-httpd" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.212827 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d1693d-5ab9-46b2-a4dd-de325b074f0f" containerName="neutron-api" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.214185 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.235036 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh5g8"] Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.308085 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9g5d\" (UniqueName: \"kubernetes.io/projected/64b5132c-39d4-4601-8990-44e07f3e381a-kube-api-access-z9g5d\") pod \"redhat-marketplace-zh5g8\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.308137 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-utilities\") pod \"redhat-marketplace-zh5g8\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.308226 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-catalog-content\") pod \"redhat-marketplace-zh5g8\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.410352 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-z9g5d\" (UniqueName: \"kubernetes.io/projected/64b5132c-39d4-4601-8990-44e07f3e381a-kube-api-access-z9g5d\") pod \"redhat-marketplace-zh5g8\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.410406 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-utilities\") pod \"redhat-marketplace-zh5g8\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.410436 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-catalog-content\") pod \"redhat-marketplace-zh5g8\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.411462 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-utilities\") pod \"redhat-marketplace-zh5g8\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.411673 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-catalog-content\") pod \"redhat-marketplace-zh5g8\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.433397 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-z9g5d\" (UniqueName: \"kubernetes.io/projected/64b5132c-39d4-4601-8990-44e07f3e381a-kube-api-access-z9g5d\") pod \"redhat-marketplace-zh5g8\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:26 crc kubenswrapper[4745]: I0121 11:54:26.534779 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:27 crc kubenswrapper[4745]: I0121 11:54:27.183778 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh5g8"] Jan 21 11:54:27 crc kubenswrapper[4745]: I0121 11:54:27.976748 4745 generic.go:334] "Generic (PLEG): container finished" podID="64b5132c-39d4-4601-8990-44e07f3e381a" containerID="56517a12d80a7a5419af6d818a2a3745cb6df8ebb52c1e051b1885ab1ec554e5" exitCode=0 Jan 21 11:54:27 crc kubenswrapper[4745]: I0121 11:54:27.976815 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh5g8" event={"ID":"64b5132c-39d4-4601-8990-44e07f3e381a","Type":"ContainerDied","Data":"56517a12d80a7a5419af6d818a2a3745cb6df8ebb52c1e051b1885ab1ec554e5"} Jan 21 11:54:27 crc kubenswrapper[4745]: I0121 11:54:27.977058 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh5g8" event={"ID":"64b5132c-39d4-4601-8990-44e07f3e381a","Type":"ContainerStarted","Data":"9034809be3eb81ef4e7b96ceb794650aecd0c12e105304999c4fa3f88eed97b7"} Jan 21 11:54:27 crc kubenswrapper[4745]: I0121 11:54:27.980094 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 11:54:29 crc kubenswrapper[4745]: I0121 11:54:29.995965 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh5g8" 
event={"ID":"64b5132c-39d4-4601-8990-44e07f3e381a","Type":"ContainerStarted","Data":"7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708"} Jan 21 11:54:31 crc kubenswrapper[4745]: I0121 11:54:31.005850 4745 generic.go:334] "Generic (PLEG): container finished" podID="64b5132c-39d4-4601-8990-44e07f3e381a" containerID="7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708" exitCode=0 Jan 21 11:54:31 crc kubenswrapper[4745]: I0121 11:54:31.005895 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh5g8" event={"ID":"64b5132c-39d4-4601-8990-44e07f3e381a","Type":"ContainerDied","Data":"7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708"} Jan 21 11:54:32 crc kubenswrapper[4745]: I0121 11:54:32.027542 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh5g8" event={"ID":"64b5132c-39d4-4601-8990-44e07f3e381a","Type":"ContainerStarted","Data":"92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377"} Jan 21 11:54:32 crc kubenswrapper[4745]: I0121 11:54:32.050402 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zh5g8" podStartSLOduration=2.327283238 podStartE2EDuration="6.05031959s" podCreationTimestamp="2026-01-21 11:54:26 +0000 UTC" firstStartedPulling="2026-01-21 11:54:27.979146603 +0000 UTC m=+4652.439934201" lastFinishedPulling="2026-01-21 11:54:31.702182945 +0000 UTC m=+4656.162970553" observedRunningTime="2026-01-21 11:54:32.046637499 +0000 UTC m=+4656.507425097" watchObservedRunningTime="2026-01-21 11:54:32.05031959 +0000 UTC m=+4656.511107188" Jan 21 11:54:35 crc kubenswrapper[4745]: I0121 11:54:35.000782 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:54:35 crc kubenswrapper[4745]: E0121 11:54:35.001516 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:54:36 crc kubenswrapper[4745]: I0121 11:54:36.535042 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:36 crc kubenswrapper[4745]: I0121 11:54:36.535335 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:37 crc kubenswrapper[4745]: I0121 11:54:37.407951 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:37 crc kubenswrapper[4745]: I0121 11:54:37.486340 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:37 crc kubenswrapper[4745]: I0121 11:54:37.645238 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh5g8"] Jan 21 11:54:39 crc kubenswrapper[4745]: I0121 11:54:39.098028 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zh5g8" podUID="64b5132c-39d4-4601-8990-44e07f3e381a" containerName="registry-server" containerID="cri-o://92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377" gracePeriod=2 Jan 21 11:54:39 crc kubenswrapper[4745]: I0121 11:54:39.720230 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:39 crc kubenswrapper[4745]: I0121 11:54:39.885195 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-utilities\") pod \"64b5132c-39d4-4601-8990-44e07f3e381a\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " Jan 21 11:54:39 crc kubenswrapper[4745]: I0121 11:54:39.885381 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9g5d\" (UniqueName: \"kubernetes.io/projected/64b5132c-39d4-4601-8990-44e07f3e381a-kube-api-access-z9g5d\") pod \"64b5132c-39d4-4601-8990-44e07f3e381a\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " Jan 21 11:54:39 crc kubenswrapper[4745]: I0121 11:54:39.886559 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-utilities" (OuterVolumeSpecName: "utilities") pod "64b5132c-39d4-4601-8990-44e07f3e381a" (UID: "64b5132c-39d4-4601-8990-44e07f3e381a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:54:39 crc kubenswrapper[4745]: I0121 11:54:39.886910 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-catalog-content\") pod \"64b5132c-39d4-4601-8990-44e07f3e381a\" (UID: \"64b5132c-39d4-4601-8990-44e07f3e381a\") " Jan 21 11:54:39 crc kubenswrapper[4745]: I0121 11:54:39.887853 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:54:39 crc kubenswrapper[4745]: I0121 11:54:39.899740 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b5132c-39d4-4601-8990-44e07f3e381a-kube-api-access-z9g5d" (OuterVolumeSpecName: "kube-api-access-z9g5d") pod "64b5132c-39d4-4601-8990-44e07f3e381a" (UID: "64b5132c-39d4-4601-8990-44e07f3e381a"). InnerVolumeSpecName "kube-api-access-z9g5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:54:39 crc kubenswrapper[4745]: I0121 11:54:39.930211 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64b5132c-39d4-4601-8990-44e07f3e381a" (UID: "64b5132c-39d4-4601-8990-44e07f3e381a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:54:39 crc kubenswrapper[4745]: I0121 11:54:39.990707 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9g5d\" (UniqueName: \"kubernetes.io/projected/64b5132c-39d4-4601-8990-44e07f3e381a-kube-api-access-z9g5d\") on node \"crc\" DevicePath \"\"" Jan 21 11:54:39 crc kubenswrapper[4745]: I0121 11:54:39.990747 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64b5132c-39d4-4601-8990-44e07f3e381a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.110173 4745 generic.go:334] "Generic (PLEG): container finished" podID="64b5132c-39d4-4601-8990-44e07f3e381a" containerID="92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377" exitCode=0 Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.110282 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh5g8" event={"ID":"64b5132c-39d4-4601-8990-44e07f3e381a","Type":"ContainerDied","Data":"92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377"} Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.110304 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zh5g8" Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.110332 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zh5g8" event={"ID":"64b5132c-39d4-4601-8990-44e07f3e381a","Type":"ContainerDied","Data":"9034809be3eb81ef4e7b96ceb794650aecd0c12e105304999c4fa3f88eed97b7"} Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.110386 4745 scope.go:117] "RemoveContainer" containerID="92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377" Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.146803 4745 scope.go:117] "RemoveContainer" containerID="7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708" Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.149170 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh5g8"] Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.159936 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zh5g8"] Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.183662 4745 scope.go:117] "RemoveContainer" containerID="56517a12d80a7a5419af6d818a2a3745cb6df8ebb52c1e051b1885ab1ec554e5" Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.244080 4745 scope.go:117] "RemoveContainer" containerID="92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377" Jan 21 11:54:40 crc kubenswrapper[4745]: E0121 11:54:40.244611 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377\": container with ID starting with 92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377 not found: ID does not exist" containerID="92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377" Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.244667 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377"} err="failed to get container status \"92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377\": rpc error: code = NotFound desc = could not find container \"92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377\": container with ID starting with 92b036bd7713a20b34b0db001078fa3d4cd3f92256d89e81140324f26a482377 not found: ID does not exist" Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.244726 4745 scope.go:117] "RemoveContainer" containerID="7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708" Jan 21 11:54:40 crc kubenswrapper[4745]: E0121 11:54:40.245556 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708\": container with ID starting with 7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708 not found: ID does not exist" containerID="7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708" Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.245586 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708"} err="failed to get container status \"7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708\": rpc error: code = NotFound desc = could not find container \"7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708\": container with ID starting with 7f8a1f0ff65023cf0d99edccf3a1832beb8a8830eedf0a3d5dbd96b33e67c708 not found: ID does not exist" Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.245601 4745 scope.go:117] "RemoveContainer" containerID="56517a12d80a7a5419af6d818a2a3745cb6df8ebb52c1e051b1885ab1ec554e5" Jan 21 11:54:40 crc kubenswrapper[4745]: E0121 
11:54:40.245995 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56517a12d80a7a5419af6d818a2a3745cb6df8ebb52c1e051b1885ab1ec554e5\": container with ID starting with 56517a12d80a7a5419af6d818a2a3745cb6df8ebb52c1e051b1885ab1ec554e5 not found: ID does not exist" containerID="56517a12d80a7a5419af6d818a2a3745cb6df8ebb52c1e051b1885ab1ec554e5" Jan 21 11:54:40 crc kubenswrapper[4745]: I0121 11:54:40.246024 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56517a12d80a7a5419af6d818a2a3745cb6df8ebb52c1e051b1885ab1ec554e5"} err="failed to get container status \"56517a12d80a7a5419af6d818a2a3745cb6df8ebb52c1e051b1885ab1ec554e5\": rpc error: code = NotFound desc = could not find container \"56517a12d80a7a5419af6d818a2a3745cb6df8ebb52c1e051b1885ab1ec554e5\": container with ID starting with 56517a12d80a7a5419af6d818a2a3745cb6df8ebb52c1e051b1885ab1ec554e5 not found: ID does not exist" Jan 21 11:54:42 crc kubenswrapper[4745]: I0121 11:54:42.010600 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64b5132c-39d4-4601-8990-44e07f3e381a" path="/var/lib/kubelet/pods/64b5132c-39d4-4601-8990-44e07f3e381a/volumes" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.132680 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4pbk4"] Jan 21 11:54:45 crc kubenswrapper[4745]: E0121 11:54:45.135367 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b5132c-39d4-4601-8990-44e07f3e381a" containerName="extract-utilities" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.135438 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b5132c-39d4-4601-8990-44e07f3e381a" containerName="extract-utilities" Jan 21 11:54:45 crc kubenswrapper[4745]: E0121 11:54:45.135546 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="64b5132c-39d4-4601-8990-44e07f3e381a" containerName="extract-content" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.135613 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b5132c-39d4-4601-8990-44e07f3e381a" containerName="extract-content" Jan 21 11:54:45 crc kubenswrapper[4745]: E0121 11:54:45.135684 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b5132c-39d4-4601-8990-44e07f3e381a" containerName="registry-server" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.135743 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b5132c-39d4-4601-8990-44e07f3e381a" containerName="registry-server" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.135984 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b5132c-39d4-4601-8990-44e07f3e381a" containerName="registry-server" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.138401 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.207441 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4pbk4"] Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.297436 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-utilities\") pod \"redhat-operators-4pbk4\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.297785 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flwkc\" (UniqueName: \"kubernetes.io/projected/08ce6d1b-2f35-473a-a4df-dc525e24f554-kube-api-access-flwkc\") pod \"redhat-operators-4pbk4\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " 
pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.297898 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-catalog-content\") pod \"redhat-operators-4pbk4\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.399650 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-utilities\") pod \"redhat-operators-4pbk4\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.399757 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flwkc\" (UniqueName: \"kubernetes.io/projected/08ce6d1b-2f35-473a-a4df-dc525e24f554-kube-api-access-flwkc\") pod \"redhat-operators-4pbk4\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.399798 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-catalog-content\") pod \"redhat-operators-4pbk4\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.400463 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-utilities\") pod \"redhat-operators-4pbk4\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " 
pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.400564 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-catalog-content\") pod \"redhat-operators-4pbk4\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.432718 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flwkc\" (UniqueName: \"kubernetes.io/projected/08ce6d1b-2f35-473a-a4df-dc525e24f554-kube-api-access-flwkc\") pod \"redhat-operators-4pbk4\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.465275 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:45 crc kubenswrapper[4745]: I0121 11:54:45.990127 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4pbk4"] Jan 21 11:54:46 crc kubenswrapper[4745]: I0121 11:54:46.202493 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pbk4" event={"ID":"08ce6d1b-2f35-473a-a4df-dc525e24f554","Type":"ContainerStarted","Data":"18f9138210b5bbc091e196f64077b05fb5f731137643caf9f6ef70ad3e12a090"} Jan 21 11:54:47 crc kubenswrapper[4745]: I0121 11:54:47.221097 4745 generic.go:334] "Generic (PLEG): container finished" podID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerID="c29117f1de861421613b6c52ecb5bb09f14e9c114b4c69a4b6fcfa69fa1ada33" exitCode=0 Jan 21 11:54:47 crc kubenswrapper[4745]: I0121 11:54:47.221265 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pbk4" 
event={"ID":"08ce6d1b-2f35-473a-a4df-dc525e24f554","Type":"ContainerDied","Data":"c29117f1de861421613b6c52ecb5bb09f14e9c114b4c69a4b6fcfa69fa1ada33"} Jan 21 11:54:48 crc kubenswrapper[4745]: I0121 11:54:48.000242 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:54:48 crc kubenswrapper[4745]: E0121 11:54:48.000552 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:54:49 crc kubenswrapper[4745]: I0121 11:54:49.248070 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pbk4" event={"ID":"08ce6d1b-2f35-473a-a4df-dc525e24f554","Type":"ContainerStarted","Data":"e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924"} Jan 21 11:54:52 crc kubenswrapper[4745]: I0121 11:54:52.290188 4745 generic.go:334] "Generic (PLEG): container finished" podID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerID="e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924" exitCode=0 Jan 21 11:54:52 crc kubenswrapper[4745]: I0121 11:54:52.290271 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pbk4" event={"ID":"08ce6d1b-2f35-473a-a4df-dc525e24f554","Type":"ContainerDied","Data":"e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924"} Jan 21 11:54:53 crc kubenswrapper[4745]: I0121 11:54:53.309336 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pbk4" 
event={"ID":"08ce6d1b-2f35-473a-a4df-dc525e24f554","Type":"ContainerStarted","Data":"6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635"} Jan 21 11:54:53 crc kubenswrapper[4745]: I0121 11:54:53.350772 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4pbk4" podStartSLOduration=2.836007475 podStartE2EDuration="8.350748601s" podCreationTimestamp="2026-01-21 11:54:45 +0000 UTC" firstStartedPulling="2026-01-21 11:54:47.223754269 +0000 UTC m=+4671.684541907" lastFinishedPulling="2026-01-21 11:54:52.738495435 +0000 UTC m=+4677.199283033" observedRunningTime="2026-01-21 11:54:53.339197036 +0000 UTC m=+4677.799984634" watchObservedRunningTime="2026-01-21 11:54:53.350748601 +0000 UTC m=+4677.811536199" Jan 21 11:54:55 crc kubenswrapper[4745]: I0121 11:54:55.465947 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:55 crc kubenswrapper[4745]: I0121 11:54:55.466403 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:54:56 crc kubenswrapper[4745]: I0121 11:54:56.530377 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4pbk4" podUID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerName="registry-server" probeResult="failure" output=< Jan 21 11:54:56 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:54:56 crc kubenswrapper[4745]: > Jan 21 11:55:01 crc kubenswrapper[4745]: I0121 11:55:01.001088 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:55:01 crc kubenswrapper[4745]: E0121 11:55:01.002000 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:55:06 crc kubenswrapper[4745]: I0121 11:55:06.512264 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4pbk4" podUID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerName="registry-server" probeResult="failure" output=< Jan 21 11:55:06 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:55:06 crc kubenswrapper[4745]: > Jan 21 11:55:15 crc kubenswrapper[4745]: I0121 11:55:15.527020 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:55:15 crc kubenswrapper[4745]: I0121 11:55:15.603104 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:55:16 crc kubenswrapper[4745]: I0121 11:55:16.008215 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:55:16 crc kubenswrapper[4745]: E0121 11:55:16.008568 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:55:16 crc kubenswrapper[4745]: I0121 11:55:16.326076 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4pbk4"] Jan 21 11:55:17 crc kubenswrapper[4745]: I0121 11:55:17.538985 4745 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-4pbk4" podUID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerName="registry-server" containerID="cri-o://6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635" gracePeriod=2 Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.037781 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.128571 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flwkc\" (UniqueName: \"kubernetes.io/projected/08ce6d1b-2f35-473a-a4df-dc525e24f554-kube-api-access-flwkc\") pod \"08ce6d1b-2f35-473a-a4df-dc525e24f554\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.128977 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-utilities\") pod \"08ce6d1b-2f35-473a-a4df-dc525e24f554\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.129098 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-catalog-content\") pod \"08ce6d1b-2f35-473a-a4df-dc525e24f554\" (UID: \"08ce6d1b-2f35-473a-a4df-dc525e24f554\") " Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.129742 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-utilities" (OuterVolumeSpecName: "utilities") pod "08ce6d1b-2f35-473a-a4df-dc525e24f554" (UID: "08ce6d1b-2f35-473a-a4df-dc525e24f554"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.140868 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08ce6d1b-2f35-473a-a4df-dc525e24f554-kube-api-access-flwkc" (OuterVolumeSpecName: "kube-api-access-flwkc") pod "08ce6d1b-2f35-473a-a4df-dc525e24f554" (UID: "08ce6d1b-2f35-473a-a4df-dc525e24f554"). InnerVolumeSpecName "kube-api-access-flwkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.231091 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flwkc\" (UniqueName: \"kubernetes.io/projected/08ce6d1b-2f35-473a-a4df-dc525e24f554-kube-api-access-flwkc\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.231121 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.252132 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08ce6d1b-2f35-473a-a4df-dc525e24f554" (UID: "08ce6d1b-2f35-473a-a4df-dc525e24f554"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.342180 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08ce6d1b-2f35-473a-a4df-dc525e24f554-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.552577 4745 generic.go:334] "Generic (PLEG): container finished" podID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerID="6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635" exitCode=0 Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.552635 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pbk4" event={"ID":"08ce6d1b-2f35-473a-a4df-dc525e24f554","Type":"ContainerDied","Data":"6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635"} Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.552666 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pbk4" event={"ID":"08ce6d1b-2f35-473a-a4df-dc525e24f554","Type":"ContainerDied","Data":"18f9138210b5bbc091e196f64077b05fb5f731137643caf9f6ef70ad3e12a090"} Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.552687 4745 scope.go:117] "RemoveContainer" containerID="6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.552772 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4pbk4" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.582962 4745 scope.go:117] "RemoveContainer" containerID="e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.608012 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4pbk4"] Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.622291 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4pbk4"] Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.634721 4745 scope.go:117] "RemoveContainer" containerID="c29117f1de861421613b6c52ecb5bb09f14e9c114b4c69a4b6fcfa69fa1ada33" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.690188 4745 scope.go:117] "RemoveContainer" containerID="6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635" Jan 21 11:55:18 crc kubenswrapper[4745]: E0121 11:55:18.690640 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635\": container with ID starting with 6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635 not found: ID does not exist" containerID="6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.690789 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635"} err="failed to get container status \"6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635\": rpc error: code = NotFound desc = could not find container \"6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635\": container with ID starting with 6843ed78b6b9b92dea39fc67d52f24c3d9c1bced70d09ea5d09c8377fe1df635 not found: ID does 
not exist" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.690896 4745 scope.go:117] "RemoveContainer" containerID="e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924" Jan 21 11:55:18 crc kubenswrapper[4745]: E0121 11:55:18.691608 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924\": container with ID starting with e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924 not found: ID does not exist" containerID="e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.691649 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924"} err="failed to get container status \"e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924\": rpc error: code = NotFound desc = could not find container \"e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924\": container with ID starting with e94cb26c9d00c6836bdd84a89a76c0eaebe840dcea29a0da57e70c49ccb53924 not found: ID does not exist" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.691682 4745 scope.go:117] "RemoveContainer" containerID="c29117f1de861421613b6c52ecb5bb09f14e9c114b4c69a4b6fcfa69fa1ada33" Jan 21 11:55:18 crc kubenswrapper[4745]: E0121 11:55:18.692159 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c29117f1de861421613b6c52ecb5bb09f14e9c114b4c69a4b6fcfa69fa1ada33\": container with ID starting with c29117f1de861421613b6c52ecb5bb09f14e9c114b4c69a4b6fcfa69fa1ada33 not found: ID does not exist" containerID="c29117f1de861421613b6c52ecb5bb09f14e9c114b4c69a4b6fcfa69fa1ada33" Jan 21 11:55:18 crc kubenswrapper[4745]: I0121 11:55:18.692242 4745 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c29117f1de861421613b6c52ecb5bb09f14e9c114b4c69a4b6fcfa69fa1ada33"} err="failed to get container status \"c29117f1de861421613b6c52ecb5bb09f14e9c114b4c69a4b6fcfa69fa1ada33\": rpc error: code = NotFound desc = could not find container \"c29117f1de861421613b6c52ecb5bb09f14e9c114b4c69a4b6fcfa69fa1ada33\": container with ID starting with c29117f1de861421613b6c52ecb5bb09f14e9c114b4c69a4b6fcfa69fa1ada33 not found: ID does not exist" Jan 21 11:55:20 crc kubenswrapper[4745]: I0121 11:55:20.015087 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08ce6d1b-2f35-473a-a4df-dc525e24f554" path="/var/lib/kubelet/pods/08ce6d1b-2f35-473a-a4df-dc525e24f554/volumes" Jan 21 11:55:28 crc kubenswrapper[4745]: I0121 11:55:28.001115 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:55:28 crc kubenswrapper[4745]: E0121 11:55:28.001936 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:55:42 crc kubenswrapper[4745]: I0121 11:55:42.000540 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:55:42 crc kubenswrapper[4745]: E0121 11:55:42.001372 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:55:56 crc kubenswrapper[4745]: I0121 11:55:56.009114 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:55:56 crc kubenswrapper[4745]: E0121 11:55:56.010269 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.046287 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8vd52"] Jan 21 11:55:59 crc kubenswrapper[4745]: E0121 11:55:59.047319 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerName="extract-utilities" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.047339 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerName="extract-utilities" Jan 21 11:55:59 crc kubenswrapper[4745]: E0121 11:55:59.047387 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerName="extract-content" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.047395 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerName="extract-content" Jan 21 11:55:59 crc kubenswrapper[4745]: E0121 11:55:59.047407 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerName="registry-server" Jan 21 11:55:59 crc kubenswrapper[4745]: 
I0121 11:55:59.047416 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerName="registry-server" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.047653 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="08ce6d1b-2f35-473a-a4df-dc525e24f554" containerName="registry-server" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.049328 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.059250 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vd52"] Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.167147 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dd5h\" (UniqueName: \"kubernetes.io/projected/e842fadf-aa06-4207-9826-a3dc39623fa6-kube-api-access-7dd5h\") pod \"certified-operators-8vd52\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.167237 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-catalog-content\") pod \"certified-operators-8vd52\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.167470 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-utilities\") pod \"certified-operators-8vd52\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:55:59 crc 
kubenswrapper[4745]: I0121 11:55:59.269955 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dd5h\" (UniqueName: \"kubernetes.io/projected/e842fadf-aa06-4207-9826-a3dc39623fa6-kube-api-access-7dd5h\") pod \"certified-operators-8vd52\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.270085 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-catalog-content\") pod \"certified-operators-8vd52\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.270268 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-utilities\") pod \"certified-operators-8vd52\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.270673 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-catalog-content\") pod \"certified-operators-8vd52\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.270719 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-utilities\") pod \"certified-operators-8vd52\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.293162 
4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dd5h\" (UniqueName: \"kubernetes.io/projected/e842fadf-aa06-4207-9826-a3dc39623fa6-kube-api-access-7dd5h\") pod \"certified-operators-8vd52\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:55:59 crc kubenswrapper[4745]: I0121 11:55:59.376191 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:56:00 crc kubenswrapper[4745]: I0121 11:56:00.122411 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8vd52"] Jan 21 11:56:00 crc kubenswrapper[4745]: I0121 11:56:00.954319 4745 generic.go:334] "Generic (PLEG): container finished" podID="e842fadf-aa06-4207-9826-a3dc39623fa6" containerID="3742137b9617a250c01ae7762ff943e1bf852f677dbf3e46670d1489eb29f1bd" exitCode=0 Jan 21 11:56:00 crc kubenswrapper[4745]: I0121 11:56:00.954557 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vd52" event={"ID":"e842fadf-aa06-4207-9826-a3dc39623fa6","Type":"ContainerDied","Data":"3742137b9617a250c01ae7762ff943e1bf852f677dbf3e46670d1489eb29f1bd"} Jan 21 11:56:00 crc kubenswrapper[4745]: I0121 11:56:00.954657 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vd52" event={"ID":"e842fadf-aa06-4207-9826-a3dc39623fa6","Type":"ContainerStarted","Data":"1e682269321fde376af599e0afa3b41fabaa9af6cac3abcceb417d32aa746199"} Jan 21 11:56:07 crc kubenswrapper[4745]: I0121 11:56:07.001302 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:56:07 crc kubenswrapper[4745]: E0121 11:56:07.003175 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:56:08 crc kubenswrapper[4745]: I0121 11:56:08.017991 4745 generic.go:334] "Generic (PLEG): container finished" podID="e842fadf-aa06-4207-9826-a3dc39623fa6" containerID="c219c7b375e55fb44c8e69da20fbfbffbc61a7bf7e317505df69e9b952622a5e" exitCode=0 Jan 21 11:56:08 crc kubenswrapper[4745]: I0121 11:56:08.018066 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vd52" event={"ID":"e842fadf-aa06-4207-9826-a3dc39623fa6","Type":"ContainerDied","Data":"c219c7b375e55fb44c8e69da20fbfbffbc61a7bf7e317505df69e9b952622a5e"} Jan 21 11:56:09 crc kubenswrapper[4745]: I0121 11:56:09.032356 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vd52" event={"ID":"e842fadf-aa06-4207-9826-a3dc39623fa6","Type":"ContainerStarted","Data":"6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012"} Jan 21 11:56:09 crc kubenswrapper[4745]: I0121 11:56:09.054268 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8vd52" podStartSLOduration=3.559678768 podStartE2EDuration="11.054245696s" podCreationTimestamp="2026-01-21 11:55:58 +0000 UTC" firstStartedPulling="2026-01-21 11:56:00.956165081 +0000 UTC m=+4745.416952679" lastFinishedPulling="2026-01-21 11:56:08.450732009 +0000 UTC m=+4752.911519607" observedRunningTime="2026-01-21 11:56:09.050875203 +0000 UTC m=+4753.511662801" watchObservedRunningTime="2026-01-21 11:56:09.054245696 +0000 UTC m=+4753.515033294" Jan 21 11:56:09 crc kubenswrapper[4745]: I0121 11:56:09.376418 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:56:09 crc kubenswrapper[4745]: I0121 11:56:09.376473 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:56:10 crc kubenswrapper[4745]: I0121 11:56:10.444498 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8vd52" podUID="e842fadf-aa06-4207-9826-a3dc39623fa6" containerName="registry-server" probeResult="failure" output=< Jan 21 11:56:10 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 11:56:10 crc kubenswrapper[4745]: > Jan 21 11:56:18 crc kubenswrapper[4745]: I0121 11:56:18.000286 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:56:18 crc kubenswrapper[4745]: E0121 11:56:18.001150 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:56:19 crc kubenswrapper[4745]: I0121 11:56:19.447771 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:56:19 crc kubenswrapper[4745]: I0121 11:56:19.529006 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:56:19 crc kubenswrapper[4745]: I0121 11:56:19.694657 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vd52"] Jan 21 11:56:21 crc kubenswrapper[4745]: I0121 11:56:21.142832 4745 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/certified-operators-8vd52" podUID="e842fadf-aa06-4207-9826-a3dc39623fa6" containerName="registry-server" containerID="cri-o://6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012" gracePeriod=2 Jan 21 11:56:21 crc kubenswrapper[4745]: I0121 11:56:21.736031 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:56:21 crc kubenswrapper[4745]: I0121 11:56:21.926545 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dd5h\" (UniqueName: \"kubernetes.io/projected/e842fadf-aa06-4207-9826-a3dc39623fa6-kube-api-access-7dd5h\") pod \"e842fadf-aa06-4207-9826-a3dc39623fa6\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " Jan 21 11:56:21 crc kubenswrapper[4745]: I0121 11:56:21.926666 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-utilities\") pod \"e842fadf-aa06-4207-9826-a3dc39623fa6\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " Jan 21 11:56:21 crc kubenswrapper[4745]: I0121 11:56:21.926745 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-catalog-content\") pod \"e842fadf-aa06-4207-9826-a3dc39623fa6\" (UID: \"e842fadf-aa06-4207-9826-a3dc39623fa6\") " Jan 21 11:56:21 crc kubenswrapper[4745]: I0121 11:56:21.927432 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-utilities" (OuterVolumeSpecName: "utilities") pod "e842fadf-aa06-4207-9826-a3dc39623fa6" (UID: "e842fadf-aa06-4207-9826-a3dc39623fa6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:56:21 crc kubenswrapper[4745]: I0121 11:56:21.933098 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e842fadf-aa06-4207-9826-a3dc39623fa6-kube-api-access-7dd5h" (OuterVolumeSpecName: "kube-api-access-7dd5h") pod "e842fadf-aa06-4207-9826-a3dc39623fa6" (UID: "e842fadf-aa06-4207-9826-a3dc39623fa6"). InnerVolumeSpecName "kube-api-access-7dd5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:56:21 crc kubenswrapper[4745]: I0121 11:56:21.987201 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e842fadf-aa06-4207-9826-a3dc39623fa6" (UID: "e842fadf-aa06-4207-9826-a3dc39623fa6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.029169 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.029198 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dd5h\" (UniqueName: \"kubernetes.io/projected/e842fadf-aa06-4207-9826-a3dc39623fa6-kube-api-access-7dd5h\") on node \"crc\" DevicePath \"\"" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.029208 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e842fadf-aa06-4207-9826-a3dc39623fa6-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.154441 4745 generic.go:334] "Generic (PLEG): container finished" podID="e842fadf-aa06-4207-9826-a3dc39623fa6" 
containerID="6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012" exitCode=0 Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.154482 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vd52" event={"ID":"e842fadf-aa06-4207-9826-a3dc39623fa6","Type":"ContainerDied","Data":"6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012"} Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.154508 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8vd52" event={"ID":"e842fadf-aa06-4207-9826-a3dc39623fa6","Type":"ContainerDied","Data":"1e682269321fde376af599e0afa3b41fabaa9af6cac3abcceb417d32aa746199"} Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.154523 4745 scope.go:117] "RemoveContainer" containerID="6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.154663 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8vd52" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.198005 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8vd52"] Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.200261 4745 scope.go:117] "RemoveContainer" containerID="c219c7b375e55fb44c8e69da20fbfbffbc61a7bf7e317505df69e9b952622a5e" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.227775 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8vd52"] Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.279295 4745 scope.go:117] "RemoveContainer" containerID="3742137b9617a250c01ae7762ff943e1bf852f677dbf3e46670d1489eb29f1bd" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.326307 4745 scope.go:117] "RemoveContainer" containerID="6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012" Jan 21 11:56:22 crc kubenswrapper[4745]: E0121 11:56:22.327239 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012\": container with ID starting with 6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012 not found: ID does not exist" containerID="6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.327285 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012"} err="failed to get container status \"6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012\": rpc error: code = NotFound desc = could not find container \"6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012\": container with ID starting with 6d12fb592ab5465bdb886a258c9d47720f56a00133f093d39b0b09a743c61012 not 
found: ID does not exist" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.327314 4745 scope.go:117] "RemoveContainer" containerID="c219c7b375e55fb44c8e69da20fbfbffbc61a7bf7e317505df69e9b952622a5e" Jan 21 11:56:22 crc kubenswrapper[4745]: E0121 11:56:22.327772 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c219c7b375e55fb44c8e69da20fbfbffbc61a7bf7e317505df69e9b952622a5e\": container with ID starting with c219c7b375e55fb44c8e69da20fbfbffbc61a7bf7e317505df69e9b952622a5e not found: ID does not exist" containerID="c219c7b375e55fb44c8e69da20fbfbffbc61a7bf7e317505df69e9b952622a5e" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.327832 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c219c7b375e55fb44c8e69da20fbfbffbc61a7bf7e317505df69e9b952622a5e"} err="failed to get container status \"c219c7b375e55fb44c8e69da20fbfbffbc61a7bf7e317505df69e9b952622a5e\": rpc error: code = NotFound desc = could not find container \"c219c7b375e55fb44c8e69da20fbfbffbc61a7bf7e317505df69e9b952622a5e\": container with ID starting with c219c7b375e55fb44c8e69da20fbfbffbc61a7bf7e317505df69e9b952622a5e not found: ID does not exist" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.327858 4745 scope.go:117] "RemoveContainer" containerID="3742137b9617a250c01ae7762ff943e1bf852f677dbf3e46670d1489eb29f1bd" Jan 21 11:56:22 crc kubenswrapper[4745]: E0121 11:56:22.328265 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3742137b9617a250c01ae7762ff943e1bf852f677dbf3e46670d1489eb29f1bd\": container with ID starting with 3742137b9617a250c01ae7762ff943e1bf852f677dbf3e46670d1489eb29f1bd not found: ID does not exist" containerID="3742137b9617a250c01ae7762ff943e1bf852f677dbf3e46670d1489eb29f1bd" Jan 21 11:56:22 crc kubenswrapper[4745]: I0121 11:56:22.328328 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3742137b9617a250c01ae7762ff943e1bf852f677dbf3e46670d1489eb29f1bd"} err="failed to get container status \"3742137b9617a250c01ae7762ff943e1bf852f677dbf3e46670d1489eb29f1bd\": rpc error: code = NotFound desc = could not find container \"3742137b9617a250c01ae7762ff943e1bf852f677dbf3e46670d1489eb29f1bd\": container with ID starting with 3742137b9617a250c01ae7762ff943e1bf852f677dbf3e46670d1489eb29f1bd not found: ID does not exist" Jan 21 11:56:24 crc kubenswrapper[4745]: I0121 11:56:24.012276 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e842fadf-aa06-4207-9826-a3dc39623fa6" path="/var/lib/kubelet/pods/e842fadf-aa06-4207-9826-a3dc39623fa6/volumes" Jan 21 11:56:29 crc kubenswrapper[4745]: I0121 11:56:29.001557 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:56:29 crc kubenswrapper[4745]: E0121 11:56:29.002870 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:56:41 crc kubenswrapper[4745]: I0121 11:56:41.000923 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:56:41 crc kubenswrapper[4745]: E0121 11:56:41.001823 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:56:53 crc kubenswrapper[4745]: I0121 11:56:53.000774 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:56:53 crc kubenswrapper[4745]: E0121 11:56:53.001801 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:57:04 crc kubenswrapper[4745]: I0121 11:57:04.000281 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:57:04 crc kubenswrapper[4745]: E0121 11:57:04.001346 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:57:19 crc kubenswrapper[4745]: I0121 11:57:19.000291 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:57:19 crc kubenswrapper[4745]: E0121 11:57:19.001011 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:57:32 crc kubenswrapper[4745]: I0121 11:57:32.379615 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:57:32 crc kubenswrapper[4745]: E0121 11:57:32.400382 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.101664 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zdbhd"] Jan 21 11:57:44 crc kubenswrapper[4745]: E0121 11:57:44.102766 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e842fadf-aa06-4207-9826-a3dc39623fa6" containerName="extract-content" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.102782 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e842fadf-aa06-4207-9826-a3dc39623fa6" containerName="extract-content" Jan 21 11:57:44 crc kubenswrapper[4745]: E0121 11:57:44.102838 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e842fadf-aa06-4207-9826-a3dc39623fa6" containerName="registry-server" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.102846 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e842fadf-aa06-4207-9826-a3dc39623fa6" containerName="registry-server" Jan 21 11:57:44 crc kubenswrapper[4745]: E0121 11:57:44.102854 4745 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="e842fadf-aa06-4207-9826-a3dc39623fa6" containerName="extract-utilities" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.102862 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e842fadf-aa06-4207-9826-a3dc39623fa6" containerName="extract-utilities" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.103104 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e842fadf-aa06-4207-9826-a3dc39623fa6" containerName="registry-server" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.104764 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.113475 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdbhd"] Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.174111 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-catalog-content\") pod \"community-operators-zdbhd\" (UID: \"780ab31e-7972-439a-b72d-e74503fa84ab\") " pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.174292 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-569xv\" (UniqueName: \"kubernetes.io/projected/780ab31e-7972-439a-b72d-e74503fa84ab-kube-api-access-569xv\") pod \"community-operators-zdbhd\" (UID: \"780ab31e-7972-439a-b72d-e74503fa84ab\") " pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.174395 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-utilities\") pod 
\"community-operators-zdbhd\" (UID: \"780ab31e-7972-439a-b72d-e74503fa84ab\") " pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.276798 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-569xv\" (UniqueName: \"kubernetes.io/projected/780ab31e-7972-439a-b72d-e74503fa84ab-kube-api-access-569xv\") pod \"community-operators-zdbhd\" (UID: \"780ab31e-7972-439a-b72d-e74503fa84ab\") " pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.277167 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-utilities\") pod \"community-operators-zdbhd\" (UID: \"780ab31e-7972-439a-b72d-e74503fa84ab\") " pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.277441 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-catalog-content\") pod \"community-operators-zdbhd\" (UID: \"780ab31e-7972-439a-b72d-e74503fa84ab\") " pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.277560 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-utilities\") pod \"community-operators-zdbhd\" (UID: \"780ab31e-7972-439a-b72d-e74503fa84ab\") " pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.277911 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-catalog-content\") pod \"community-operators-zdbhd\" (UID: 
\"780ab31e-7972-439a-b72d-e74503fa84ab\") " pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.311951 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-569xv\" (UniqueName: \"kubernetes.io/projected/780ab31e-7972-439a-b72d-e74503fa84ab-kube-api-access-569xv\") pod \"community-operators-zdbhd\" (UID: \"780ab31e-7972-439a-b72d-e74503fa84ab\") " pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:44 crc kubenswrapper[4745]: I0121 11:57:44.439393 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:45 crc kubenswrapper[4745]: I0121 11:57:45.000855 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:57:45 crc kubenswrapper[4745]: E0121 11:57:45.001773 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:57:45 crc kubenswrapper[4745]: I0121 11:57:45.132625 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zdbhd"] Jan 21 11:57:46 crc kubenswrapper[4745]: I0121 11:57:46.025193 4745 generic.go:334] "Generic (PLEG): container finished" podID="780ab31e-7972-439a-b72d-e74503fa84ab" containerID="45d19e548bc61458dbbccf683915ec99647429c06cc57cb66892cf0721913382" exitCode=0 Jan 21 11:57:46 crc kubenswrapper[4745]: I0121 11:57:46.025506 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdbhd" 
event={"ID":"780ab31e-7972-439a-b72d-e74503fa84ab","Type":"ContainerDied","Data":"45d19e548bc61458dbbccf683915ec99647429c06cc57cb66892cf0721913382"} Jan 21 11:57:46 crc kubenswrapper[4745]: I0121 11:57:46.025641 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdbhd" event={"ID":"780ab31e-7972-439a-b72d-e74503fa84ab","Type":"ContainerStarted","Data":"4a219e57791e798f4110113dfda96b5fad2068d369cf314493b3aa9f1f954f3b"} Jan 21 11:57:47 crc kubenswrapper[4745]: I0121 11:57:47.042211 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdbhd" event={"ID":"780ab31e-7972-439a-b72d-e74503fa84ab","Type":"ContainerStarted","Data":"3a0b85445141b80c2041d7c3662b9dc1dd1a24a57a49659750921cd8c513a444"} Jan 21 11:57:48 crc kubenswrapper[4745]: I0121 11:57:48.052262 4745 generic.go:334] "Generic (PLEG): container finished" podID="780ab31e-7972-439a-b72d-e74503fa84ab" containerID="3a0b85445141b80c2041d7c3662b9dc1dd1a24a57a49659750921cd8c513a444" exitCode=0 Jan 21 11:57:48 crc kubenswrapper[4745]: I0121 11:57:48.052316 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdbhd" event={"ID":"780ab31e-7972-439a-b72d-e74503fa84ab","Type":"ContainerDied","Data":"3a0b85445141b80c2041d7c3662b9dc1dd1a24a57a49659750921cd8c513a444"} Jan 21 11:57:49 crc kubenswrapper[4745]: I0121 11:57:49.063855 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdbhd" event={"ID":"780ab31e-7972-439a-b72d-e74503fa84ab","Type":"ContainerStarted","Data":"70489c908d1a945c330060162f5e87dec7f1bef2e044ff043d00264b9d51eed4"} Jan 21 11:57:49 crc kubenswrapper[4745]: I0121 11:57:49.090020 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zdbhd" podStartSLOduration=2.638962528 podStartE2EDuration="5.089995883s" podCreationTimestamp="2026-01-21 11:57:44 
+0000 UTC" firstStartedPulling="2026-01-21 11:57:46.027947147 +0000 UTC m=+4850.488734755" lastFinishedPulling="2026-01-21 11:57:48.478980502 +0000 UTC m=+4852.939768110" observedRunningTime="2026-01-21 11:57:49.081028359 +0000 UTC m=+4853.541815957" watchObservedRunningTime="2026-01-21 11:57:49.089995883 +0000 UTC m=+4853.550783481" Jan 21 11:57:54 crc kubenswrapper[4745]: I0121 11:57:54.440356 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:54 crc kubenswrapper[4745]: I0121 11:57:54.441057 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:54 crc kubenswrapper[4745]: I0121 11:57:54.713831 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:55 crc kubenswrapper[4745]: I0121 11:57:55.183452 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:55 crc kubenswrapper[4745]: I0121 11:57:55.243139 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zdbhd"] Jan 21 11:57:57 crc kubenswrapper[4745]: I0121 11:57:57.142221 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zdbhd" podUID="780ab31e-7972-439a-b72d-e74503fa84ab" containerName="registry-server" containerID="cri-o://70489c908d1a945c330060162f5e87dec7f1bef2e044ff043d00264b9d51eed4" gracePeriod=2 Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.002028 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:57:58 crc kubenswrapper[4745]: E0121 11:57:58.003195 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.151992 4745 generic.go:334] "Generic (PLEG): container finished" podID="780ab31e-7972-439a-b72d-e74503fa84ab" containerID="70489c908d1a945c330060162f5e87dec7f1bef2e044ff043d00264b9d51eed4" exitCode=0 Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.152037 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdbhd" event={"ID":"780ab31e-7972-439a-b72d-e74503fa84ab","Type":"ContainerDied","Data":"70489c908d1a945c330060162f5e87dec7f1bef2e044ff043d00264b9d51eed4"} Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.406904 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.483952 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-569xv\" (UniqueName: \"kubernetes.io/projected/780ab31e-7972-439a-b72d-e74503fa84ab-kube-api-access-569xv\") pod \"780ab31e-7972-439a-b72d-e74503fa84ab\" (UID: \"780ab31e-7972-439a-b72d-e74503fa84ab\") " Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.484199 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-catalog-content\") pod \"780ab31e-7972-439a-b72d-e74503fa84ab\" (UID: \"780ab31e-7972-439a-b72d-e74503fa84ab\") " Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.484273 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-utilities\") pod \"780ab31e-7972-439a-b72d-e74503fa84ab\" (UID: \"780ab31e-7972-439a-b72d-e74503fa84ab\") " Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.484860 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-utilities" (OuterVolumeSpecName: "utilities") pod "780ab31e-7972-439a-b72d-e74503fa84ab" (UID: "780ab31e-7972-439a-b72d-e74503fa84ab"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.485379 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.491060 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/780ab31e-7972-439a-b72d-e74503fa84ab-kube-api-access-569xv" (OuterVolumeSpecName: "kube-api-access-569xv") pod "780ab31e-7972-439a-b72d-e74503fa84ab" (UID: "780ab31e-7972-439a-b72d-e74503fa84ab"). InnerVolumeSpecName "kube-api-access-569xv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.540690 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "780ab31e-7972-439a-b72d-e74503fa84ab" (UID: "780ab31e-7972-439a-b72d-e74503fa84ab"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.586760 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ab31e-7972-439a-b72d-e74503fa84ab-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 11:57:58 crc kubenswrapper[4745]: I0121 11:57:58.586798 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-569xv\" (UniqueName: \"kubernetes.io/projected/780ab31e-7972-439a-b72d-e74503fa84ab-kube-api-access-569xv\") on node \"crc\" DevicePath \"\"" Jan 21 11:57:59 crc kubenswrapper[4745]: I0121 11:57:59.161862 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zdbhd" event={"ID":"780ab31e-7972-439a-b72d-e74503fa84ab","Type":"ContainerDied","Data":"4a219e57791e798f4110113dfda96b5fad2068d369cf314493b3aa9f1f954f3b"} Jan 21 11:57:59 crc kubenswrapper[4745]: I0121 11:57:59.161925 4745 scope.go:117] "RemoveContainer" containerID="70489c908d1a945c330060162f5e87dec7f1bef2e044ff043d00264b9d51eed4" Jan 21 11:57:59 crc kubenswrapper[4745]: I0121 11:57:59.161946 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zdbhd" Jan 21 11:57:59 crc kubenswrapper[4745]: I0121 11:57:59.187281 4745 scope.go:117] "RemoveContainer" containerID="3a0b85445141b80c2041d7c3662b9dc1dd1a24a57a49659750921cd8c513a444" Jan 21 11:57:59 crc kubenswrapper[4745]: I0121 11:57:59.204694 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zdbhd"] Jan 21 11:57:59 crc kubenswrapper[4745]: I0121 11:57:59.223062 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zdbhd"] Jan 21 11:57:59 crc kubenswrapper[4745]: I0121 11:57:59.226694 4745 scope.go:117] "RemoveContainer" containerID="45d19e548bc61458dbbccf683915ec99647429c06cc57cb66892cf0721913382" Jan 21 11:58:00 crc kubenswrapper[4745]: I0121 11:58:00.011016 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="780ab31e-7972-439a-b72d-e74503fa84ab" path="/var/lib/kubelet/pods/780ab31e-7972-439a-b72d-e74503fa84ab/volumes" Jan 21 11:58:12 crc kubenswrapper[4745]: I0121 11:58:12.000057 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:58:12 crc kubenswrapper[4745]: E0121 11:58:12.000881 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 11:58:25 crc kubenswrapper[4745]: I0121 11:58:25.000186 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 11:58:26 crc kubenswrapper[4745]: I0121 11:58:26.433746 4745 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"6041e75a049a8aca05b8c1af6df8ce309bfbdf15bbb79adc5b2733e55ddebaf0"} Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.732178 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84"] Jan 21 12:00:00 crc kubenswrapper[4745]: E0121 12:00:00.733335 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ab31e-7972-439a-b72d-e74503fa84ab" containerName="registry-server" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.733355 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ab31e-7972-439a-b72d-e74503fa84ab" containerName="registry-server" Jan 21 12:00:00 crc kubenswrapper[4745]: E0121 12:00:00.733374 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ab31e-7972-439a-b72d-e74503fa84ab" containerName="extract-content" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.733380 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ab31e-7972-439a-b72d-e74503fa84ab" containerName="extract-content" Jan 21 12:00:00 crc kubenswrapper[4745]: E0121 12:00:00.733407 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ab31e-7972-439a-b72d-e74503fa84ab" containerName="extract-utilities" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.733421 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ab31e-7972-439a-b72d-e74503fa84ab" containerName="extract-utilities" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.733725 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="780ab31e-7972-439a-b72d-e74503fa84ab" containerName="registry-server" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.734517 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.742724 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.742737 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.745570 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84"] Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.845251 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-secret-volume\") pod \"collect-profiles-29483280-cnd84\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.845406 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7s77\" (UniqueName: \"kubernetes.io/projected/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-kube-api-access-n7s77\") pod \"collect-profiles-29483280-cnd84\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.845437 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-config-volume\") pod \"collect-profiles-29483280-cnd84\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.947128 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7s77\" (UniqueName: \"kubernetes.io/projected/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-kube-api-access-n7s77\") pod \"collect-profiles-29483280-cnd84\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.947198 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-config-volume\") pod \"collect-profiles-29483280-cnd84\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.947269 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-secret-volume\") pod \"collect-profiles-29483280-cnd84\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.949923 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-config-volume\") pod \"collect-profiles-29483280-cnd84\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.959741 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-secret-volume\") pod \"collect-profiles-29483280-cnd84\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:00 crc kubenswrapper[4745]: I0121 12:00:00.965409 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7s77\" (UniqueName: \"kubernetes.io/projected/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-kube-api-access-n7s77\") pod \"collect-profiles-29483280-cnd84\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:01 crc kubenswrapper[4745]: I0121 12:00:01.062523 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:01 crc kubenswrapper[4745]: I0121 12:00:01.531711 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84"] Jan 21 12:00:02 crc kubenswrapper[4745]: I0121 12:00:02.365827 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" event={"ID":"673fc212-6fed-4d90-9b92-7d6e1c9fecf5","Type":"ContainerStarted","Data":"c9cd82307203a2cfae47f14626011b610fd569aacd3f899ccda317a914a5e771"} Jan 21 12:00:03 crc kubenswrapper[4745]: I0121 12:00:03.375984 4745 generic.go:334] "Generic (PLEG): container finished" podID="673fc212-6fed-4d90-9b92-7d6e1c9fecf5" containerID="37e17c2eeefc52b5a34e8ba5173f5bf9405ed9a3fbea58e672a721ed177de78c" exitCode=0 Jan 21 12:00:03 crc kubenswrapper[4745]: I0121 12:00:03.376064 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" 
event={"ID":"673fc212-6fed-4d90-9b92-7d6e1c9fecf5","Type":"ContainerDied","Data":"37e17c2eeefc52b5a34e8ba5173f5bf9405ed9a3fbea58e672a721ed177de78c"} Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.075933 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.177511 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7s77\" (UniqueName: \"kubernetes.io/projected/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-kube-api-access-n7s77\") pod \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.177583 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-config-volume\") pod \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.177703 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-secret-volume\") pod \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\" (UID: \"673fc212-6fed-4d90-9b92-7d6e1c9fecf5\") " Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.178294 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-config-volume" (OuterVolumeSpecName: "config-volume") pod "673fc212-6fed-4d90-9b92-7d6e1c9fecf5" (UID: "673fc212-6fed-4d90-9b92-7d6e1c9fecf5"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.184097 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "673fc212-6fed-4d90-9b92-7d6e1c9fecf5" (UID: "673fc212-6fed-4d90-9b92-7d6e1c9fecf5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.184329 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-kube-api-access-n7s77" (OuterVolumeSpecName: "kube-api-access-n7s77") pod "673fc212-6fed-4d90-9b92-7d6e1c9fecf5" (UID: "673fc212-6fed-4d90-9b92-7d6e1c9fecf5"). InnerVolumeSpecName "kube-api-access-n7s77". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.280766 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.280823 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7s77\" (UniqueName: \"kubernetes.io/projected/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-kube-api-access-n7s77\") on node \"crc\" DevicePath \"\"" Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.280835 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/673fc212-6fed-4d90-9b92-7d6e1c9fecf5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.396122 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" 
event={"ID":"673fc212-6fed-4d90-9b92-7d6e1c9fecf5","Type":"ContainerDied","Data":"c9cd82307203a2cfae47f14626011b610fd569aacd3f899ccda317a914a5e771"} Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.396160 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9cd82307203a2cfae47f14626011b610fd569aacd3f899ccda317a914a5e771" Jan 21 12:00:05 crc kubenswrapper[4745]: I0121 12:00:05.396217 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84" Jan 21 12:00:06 crc kubenswrapper[4745]: I0121 12:00:06.158857 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2"] Jan 21 12:00:06 crc kubenswrapper[4745]: I0121 12:00:06.166127 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483235-5mmr2"] Jan 21 12:00:08 crc kubenswrapper[4745]: I0121 12:00:08.012051 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6cc744e-e212-4893-9fcb-60a835f3d83d" path="/var/lib/kubelet/pods/d6cc744e-e212-4893-9fcb-60a835f3d83d/volumes" Jan 21 12:00:45 crc kubenswrapper[4745]: I0121 12:00:45.866126 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:00:45 crc kubenswrapper[4745]: I0121 12:00:45.866771 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:00:58 crc 
kubenswrapper[4745]: I0121 12:00:58.095137 4745 scope.go:117] "RemoveContainer" containerID="ef6c8a271cbdb56df63572886515e9549dd928f532da9240c148b8a04273966a" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.167485 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29483281-8vl8f"] Jan 21 12:01:00 crc kubenswrapper[4745]: E0121 12:01:00.168728 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="673fc212-6fed-4d90-9b92-7d6e1c9fecf5" containerName="collect-profiles" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.168748 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="673fc212-6fed-4d90-9b92-7d6e1c9fecf5" containerName="collect-profiles" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.168989 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="673fc212-6fed-4d90-9b92-7d6e1c9fecf5" containerName="collect-profiles" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.169783 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.189488 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483281-8vl8f"] Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.281284 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-combined-ca-bundle\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.281428 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62xwv\" (UniqueName: \"kubernetes.io/projected/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-kube-api-access-62xwv\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.281465 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-fernet-keys\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.281687 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-config-data\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.383113 4745 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-config-data\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.383203 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-combined-ca-bundle\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.383256 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62xwv\" (UniqueName: \"kubernetes.io/projected/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-kube-api-access-62xwv\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.383275 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-fernet-keys\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.390516 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-combined-ca-bundle\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.390932 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-config-data\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.393019 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-fernet-keys\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.409158 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62xwv\" (UniqueName: \"kubernetes.io/projected/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-kube-api-access-62xwv\") pod \"keystone-cron-29483281-8vl8f\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.494240 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:00 crc kubenswrapper[4745]: I0121 12:01:00.997948 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29483281-8vl8f"] Jan 21 12:01:01 crc kubenswrapper[4745]: I0121 12:01:01.998992 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-8vl8f" event={"ID":"c5e0d2f0-c75a-43d7-bed6-b120867ccf85","Type":"ContainerStarted","Data":"8697d8f7a71e2d6aae5d409e9c7c8bbe0c94dd298ae9d629ba61df11b6e2bd43"} Jan 21 12:01:02 crc kubenswrapper[4745]: I0121 12:01:02.012340 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-8vl8f" event={"ID":"c5e0d2f0-c75a-43d7-bed6-b120867ccf85","Type":"ContainerStarted","Data":"5ca971eee628a247044550bbbb50606a8bdaea25795409a9c0762d4d7b1c4d93"} Jan 21 12:01:02 crc kubenswrapper[4745]: I0121 12:01:02.027052 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29483281-8vl8f" podStartSLOduration=2.027029882 podStartE2EDuration="2.027029882s" podCreationTimestamp="2026-01-21 12:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 12:01:02.019324101 +0000 UTC m=+5046.480111709" watchObservedRunningTime="2026-01-21 12:01:02.027029882 +0000 UTC m=+5046.487817480" Jan 21 12:01:05 crc kubenswrapper[4745]: I0121 12:01:05.033839 4745 generic.go:334] "Generic (PLEG): container finished" podID="c5e0d2f0-c75a-43d7-bed6-b120867ccf85" containerID="8697d8f7a71e2d6aae5d409e9c7c8bbe0c94dd298ae9d629ba61df11b6e2bd43" exitCode=0 Jan 21 12:01:05 crc kubenswrapper[4745]: I0121 12:01:05.033943 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-8vl8f" 
event={"ID":"c5e0d2f0-c75a-43d7-bed6-b120867ccf85","Type":"ContainerDied","Data":"8697d8f7a71e2d6aae5d409e9c7c8bbe0c94dd298ae9d629ba61df11b6e2bd43"} Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.583937 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.641814 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-config-data\") pod \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.641870 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-combined-ca-bundle\") pod \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.642009 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-fernet-keys\") pod \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.649739 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c5e0d2f0-c75a-43d7-bed6-b120867ccf85" (UID: "c5e0d2f0-c75a-43d7-bed6-b120867ccf85"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.687908 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5e0d2f0-c75a-43d7-bed6-b120867ccf85" (UID: "c5e0d2f0-c75a-43d7-bed6-b120867ccf85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.731612 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-config-data" (OuterVolumeSpecName: "config-data") pod "c5e0d2f0-c75a-43d7-bed6-b120867ccf85" (UID: "c5e0d2f0-c75a-43d7-bed6-b120867ccf85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.743391 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62xwv\" (UniqueName: \"kubernetes.io/projected/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-kube-api-access-62xwv\") pod \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\" (UID: \"c5e0d2f0-c75a-43d7-bed6-b120867ccf85\") " Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.744067 4745 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.744083 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.744093 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.746514 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-kube-api-access-62xwv" (OuterVolumeSpecName: "kube-api-access-62xwv") pod "c5e0d2f0-c75a-43d7-bed6-b120867ccf85" (UID: "c5e0d2f0-c75a-43d7-bed6-b120867ccf85"). InnerVolumeSpecName "kube-api-access-62xwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:01:06 crc kubenswrapper[4745]: I0121 12:01:06.846024 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62xwv\" (UniqueName: \"kubernetes.io/projected/c5e0d2f0-c75a-43d7-bed6-b120867ccf85-kube-api-access-62xwv\") on node \"crc\" DevicePath \"\"" Jan 21 12:01:07 crc kubenswrapper[4745]: I0121 12:01:07.059808 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29483281-8vl8f" event={"ID":"c5e0d2f0-c75a-43d7-bed6-b120867ccf85","Type":"ContainerDied","Data":"5ca971eee628a247044550bbbb50606a8bdaea25795409a9c0762d4d7b1c4d93"} Jan 21 12:01:07 crc kubenswrapper[4745]: I0121 12:01:07.060178 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ca971eee628a247044550bbbb50606a8bdaea25795409a9c0762d4d7b1c4d93" Jan 21 12:01:07 crc kubenswrapper[4745]: I0121 12:01:07.059931 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29483281-8vl8f" Jan 21 12:01:15 crc kubenswrapper[4745]: I0121 12:01:15.866765 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:01:15 crc kubenswrapper[4745]: I0121 12:01:15.867409 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:01:45 crc kubenswrapper[4745]: I0121 12:01:45.866869 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:01:45 crc kubenswrapper[4745]: I0121 12:01:45.868325 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:01:45 crc kubenswrapper[4745]: I0121 12:01:45.868435 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 12:01:45 crc kubenswrapper[4745]: I0121 12:01:45.869256 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"6041e75a049a8aca05b8c1af6df8ce309bfbdf15bbb79adc5b2733e55ddebaf0"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:01:45 crc kubenswrapper[4745]: I0121 12:01:45.869412 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://6041e75a049a8aca05b8c1af6df8ce309bfbdf15bbb79adc5b2733e55ddebaf0" gracePeriod=600 Jan 21 12:01:46 crc kubenswrapper[4745]: I0121 12:01:46.452058 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="6041e75a049a8aca05b8c1af6df8ce309bfbdf15bbb79adc5b2733e55ddebaf0" exitCode=0 Jan 21 12:01:46 crc kubenswrapper[4745]: I0121 12:01:46.452147 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"6041e75a049a8aca05b8c1af6df8ce309bfbdf15bbb79adc5b2733e55ddebaf0"} Jan 21 12:01:46 crc kubenswrapper[4745]: I0121 12:01:46.452918 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1"} Jan 21 12:01:46 crc kubenswrapper[4745]: I0121 12:01:46.452966 4745 scope.go:117] "RemoveContainer" containerID="b21cb39b047d9c4ccc5c337fdeafda8a9fdcf200805fc9c4e0c7610b1303178a" Jan 21 12:04:15 crc kubenswrapper[4745]: I0121 12:04:15.866661 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:04:15 crc kubenswrapper[4745]: I0121 12:04:15.867278 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:04:45 crc kubenswrapper[4745]: I0121 12:04:45.866953 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:04:45 crc kubenswrapper[4745]: I0121 12:04:45.867562 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:05:15 crc kubenswrapper[4745]: I0121 12:05:15.866740 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:05:15 crc kubenswrapper[4745]: I0121 12:05:15.867514 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 21 12:05:15 crc kubenswrapper[4745]: I0121 12:05:15.867605 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 12:05:15 crc kubenswrapper[4745]: I0121 12:05:15.868695 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:05:15 crc kubenswrapper[4745]: I0121 12:05:15.868788 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" gracePeriod=600 Jan 21 12:05:16 crc kubenswrapper[4745]: E0121 12:05:16.020403 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:05:16 crc kubenswrapper[4745]: I0121 12:05:16.683102 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" exitCode=0 Jan 21 12:05:16 crc kubenswrapper[4745]: I0121 12:05:16.683177 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" 
event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1"} Jan 21 12:05:16 crc kubenswrapper[4745]: I0121 12:05:16.683323 4745 scope.go:117] "RemoveContainer" containerID="6041e75a049a8aca05b8c1af6df8ce309bfbdf15bbb79adc5b2733e55ddebaf0" Jan 21 12:05:16 crc kubenswrapper[4745]: I0121 12:05:16.684101 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:05:16 crc kubenswrapper[4745]: E0121 12:05:16.684418 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:05:29 crc kubenswrapper[4745]: I0121 12:05:29.000708 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:05:29 crc kubenswrapper[4745]: E0121 12:05:29.001592 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.294690 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dmkm5"] Jan 21 12:05:35 crc kubenswrapper[4745]: E0121 12:05:35.295794 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c5e0d2f0-c75a-43d7-bed6-b120867ccf85" containerName="keystone-cron" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.295812 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e0d2f0-c75a-43d7-bed6-b120867ccf85" containerName="keystone-cron" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.296053 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e0d2f0-c75a-43d7-bed6-b120867ccf85" containerName="keystone-cron" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.297790 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.307843 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dmkm5"] Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.369020 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-catalog-content\") pod \"redhat-marketplace-dmkm5\" (UID: \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.369204 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8gfz\" (UniqueName: \"kubernetes.io/projected/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-kube-api-access-d8gfz\") pod \"redhat-marketplace-dmkm5\" (UID: \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.369245 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-utilities\") pod \"redhat-marketplace-dmkm5\" (UID: 
\"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.471069 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8gfz\" (UniqueName: \"kubernetes.io/projected/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-kube-api-access-d8gfz\") pod \"redhat-marketplace-dmkm5\" (UID: \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.471125 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-utilities\") pod \"redhat-marketplace-dmkm5\" (UID: \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.471218 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-catalog-content\") pod \"redhat-marketplace-dmkm5\" (UID: \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.471738 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-utilities\") pod \"redhat-marketplace-dmkm5\" (UID: \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.472060 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-catalog-content\") pod \"redhat-marketplace-dmkm5\" (UID: \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " 
pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.502158 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8gfz\" (UniqueName: \"kubernetes.io/projected/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-kube-api-access-d8gfz\") pod \"redhat-marketplace-dmkm5\" (UID: \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:35 crc kubenswrapper[4745]: I0121 12:05:35.617966 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:36 crc kubenswrapper[4745]: I0121 12:05:36.133825 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dmkm5"] Jan 21 12:05:36 crc kubenswrapper[4745]: I0121 12:05:36.942623 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmkm5" event={"ID":"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad","Type":"ContainerStarted","Data":"3da9191346f8939e30d3a493fd2c2067d284b062ed6bdcbeb09874d5370f9ee5"} Jan 21 12:05:37 crc kubenswrapper[4745]: I0121 12:05:37.953677 4745 generic.go:334] "Generic (PLEG): container finished" podID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerID="b24ae3aefedb5ee7e21f35457dc582aa0d163f72525489c954b3063e993d5525" exitCode=0 Jan 21 12:05:37 crc kubenswrapper[4745]: I0121 12:05:37.953751 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmkm5" event={"ID":"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad","Type":"ContainerDied","Data":"b24ae3aefedb5ee7e21f35457dc582aa0d163f72525489c954b3063e993d5525"} Jan 21 12:05:38 crc kubenswrapper[4745]: I0121 12:05:38.965330 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:05:40 crc kubenswrapper[4745]: I0121 12:05:40.000325 4745 scope.go:117] "RemoveContainer" 
containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:05:40 crc kubenswrapper[4745]: E0121 12:05:40.000801 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:05:41 crc kubenswrapper[4745]: I0121 12:05:41.990338 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmkm5" event={"ID":"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad","Type":"ContainerStarted","Data":"b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab"} Jan 21 12:05:43 crc kubenswrapper[4745]: I0121 12:05:43.020952 4745 generic.go:334] "Generic (PLEG): container finished" podID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerID="b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab" exitCode=0 Jan 21 12:05:43 crc kubenswrapper[4745]: I0121 12:05:43.021165 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmkm5" event={"ID":"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad","Type":"ContainerDied","Data":"b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab"} Jan 21 12:05:44 crc kubenswrapper[4745]: I0121 12:05:44.032341 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmkm5" event={"ID":"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad","Type":"ContainerStarted","Data":"71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050"} Jan 21 12:05:44 crc kubenswrapper[4745]: I0121 12:05:44.066402 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dmkm5" 
podStartSLOduration=4.592004827 podStartE2EDuration="9.066375509s" podCreationTimestamp="2026-01-21 12:05:35 +0000 UTC" firstStartedPulling="2026-01-21 12:05:38.964851667 +0000 UTC m=+5323.425639265" lastFinishedPulling="2026-01-21 12:05:43.439222349 +0000 UTC m=+5327.900009947" observedRunningTime="2026-01-21 12:05:44.053210894 +0000 UTC m=+5328.513998482" watchObservedRunningTime="2026-01-21 12:05:44.066375509 +0000 UTC m=+5328.527163107" Jan 21 12:05:45 crc kubenswrapper[4745]: I0121 12:05:45.618621 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:45 crc kubenswrapper[4745]: I0121 12:05:45.619851 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:46 crc kubenswrapper[4745]: I0121 12:05:46.668728 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-dmkm5" podUID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerName="registry-server" probeResult="failure" output=< Jan 21 12:05:46 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:05:46 crc kubenswrapper[4745]: > Jan 21 12:05:51 crc kubenswrapper[4745]: I0121 12:05:51.000828 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:05:51 crc kubenswrapper[4745]: E0121 12:05:51.001759 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:05:55 crc kubenswrapper[4745]: I0121 12:05:55.671330 4745 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:55 crc kubenswrapper[4745]: I0121 12:05:55.726673 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:55 crc kubenswrapper[4745]: I0121 12:05:55.914079 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dmkm5"] Jan 21 12:05:57 crc kubenswrapper[4745]: I0121 12:05:57.143039 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-dmkm5" podUID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerName="registry-server" containerID="cri-o://71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050" gracePeriod=2 Jan 21 12:05:57 crc kubenswrapper[4745]: I0121 12:05:57.663012 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:57 crc kubenswrapper[4745]: I0121 12:05:57.747941 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-catalog-content\") pod \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\" (UID: \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " Jan 21 12:05:57 crc kubenswrapper[4745]: I0121 12:05:57.748655 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-utilities\") pod \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\" (UID: \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " Jan 21 12:05:57 crc kubenswrapper[4745]: I0121 12:05:57.748808 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8gfz\" (UniqueName: 
\"kubernetes.io/projected/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-kube-api-access-d8gfz\") pod \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\" (UID: \"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad\") " Jan 21 12:05:57 crc kubenswrapper[4745]: I0121 12:05:57.749260 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-utilities" (OuterVolumeSpecName: "utilities") pod "b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" (UID: "b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:05:57 crc kubenswrapper[4745]: I0121 12:05:57.749398 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:05:57 crc kubenswrapper[4745]: I0121 12:05:57.766306 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-kube-api-access-d8gfz" (OuterVolumeSpecName: "kube-api-access-d8gfz") pod "b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" (UID: "b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad"). InnerVolumeSpecName "kube-api-access-d8gfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:05:57 crc kubenswrapper[4745]: I0121 12:05:57.784750 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" (UID: "b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:05:57 crc kubenswrapper[4745]: I0121 12:05:57.851271 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:05:57 crc kubenswrapper[4745]: I0121 12:05:57.851301 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8gfz\" (UniqueName: \"kubernetes.io/projected/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad-kube-api-access-d8gfz\") on node \"crc\" DevicePath \"\"" Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.153051 4745 generic.go:334] "Generic (PLEG): container finished" podID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerID="71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050" exitCode=0 Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.153099 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmkm5" event={"ID":"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad","Type":"ContainerDied","Data":"71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050"} Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.153435 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dmkm5" event={"ID":"b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad","Type":"ContainerDied","Data":"3da9191346f8939e30d3a493fd2c2067d284b062ed6bdcbeb09874d5370f9ee5"} Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.153464 4745 scope.go:117] "RemoveContainer" containerID="71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050" Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.153149 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dmkm5" Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.186724 4745 scope.go:117] "RemoveContainer" containerID="b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab" Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.187790 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-dmkm5"] Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.202200 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-dmkm5"] Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.209057 4745 scope.go:117] "RemoveContainer" containerID="b24ae3aefedb5ee7e21f35457dc582aa0d163f72525489c954b3063e993d5525" Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.290357 4745 scope.go:117] "RemoveContainer" containerID="71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050" Jan 21 12:05:58 crc kubenswrapper[4745]: E0121 12:05:58.291074 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050\": container with ID starting with 71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050 not found: ID does not exist" containerID="71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050" Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.291136 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050"} err="failed to get container status \"71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050\": rpc error: code = NotFound desc = could not find container \"71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050\": container with ID starting with 71d8820d647ce7a48aeca29f7f4d09fd7f3002303f8205bde5ac309caca7d050 not found: 
ID does not exist" Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.291168 4745 scope.go:117] "RemoveContainer" containerID="b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab" Jan 21 12:05:58 crc kubenswrapper[4745]: E0121 12:05:58.291886 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab\": container with ID starting with b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab not found: ID does not exist" containerID="b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab" Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.291911 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab"} err="failed to get container status \"b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab\": rpc error: code = NotFound desc = could not find container \"b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab\": container with ID starting with b9c9f3d18b92321de8d6d5930ac93afe3c62e294d67d25ac08e0eb37a712fcab not found: ID does not exist" Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.291928 4745 scope.go:117] "RemoveContainer" containerID="b24ae3aefedb5ee7e21f35457dc582aa0d163f72525489c954b3063e993d5525" Jan 21 12:05:58 crc kubenswrapper[4745]: E0121 12:05:58.292589 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b24ae3aefedb5ee7e21f35457dc582aa0d163f72525489c954b3063e993d5525\": container with ID starting with b24ae3aefedb5ee7e21f35457dc582aa0d163f72525489c954b3063e993d5525 not found: ID does not exist" containerID="b24ae3aefedb5ee7e21f35457dc582aa0d163f72525489c954b3063e993d5525" Jan 21 12:05:58 crc kubenswrapper[4745]: I0121 12:05:58.292613 4745 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24ae3aefedb5ee7e21f35457dc582aa0d163f72525489c954b3063e993d5525"} err="failed to get container status \"b24ae3aefedb5ee7e21f35457dc582aa0d163f72525489c954b3063e993d5525\": rpc error: code = NotFound desc = could not find container \"b24ae3aefedb5ee7e21f35457dc582aa0d163f72525489c954b3063e993d5525\": container with ID starting with b24ae3aefedb5ee7e21f35457dc582aa0d163f72525489c954b3063e993d5525 not found: ID does not exist" Jan 21 12:06:00 crc kubenswrapper[4745]: I0121 12:06:00.011926 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" path="/var/lib/kubelet/pods/b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad/volumes" Jan 21 12:06:02 crc kubenswrapper[4745]: I0121 12:06:02.000860 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:06:02 crc kubenswrapper[4745]: E0121 12:06:02.001327 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:06:17 crc kubenswrapper[4745]: I0121 12:06:17.000114 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:06:17 crc kubenswrapper[4745]: E0121 12:06:17.001856 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:06:20 crc kubenswrapper[4745]: I0121 12:06:20.747773 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-knxxb"] Jan 21 12:06:20 crc kubenswrapper[4745]: E0121 12:06:20.749085 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerName="extract-utilities" Jan 21 12:06:20 crc kubenswrapper[4745]: I0121 12:06:20.749099 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerName="extract-utilities" Jan 21 12:06:20 crc kubenswrapper[4745]: E0121 12:06:20.749115 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerName="extract-content" Jan 21 12:06:20 crc kubenswrapper[4745]: I0121 12:06:20.749123 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerName="extract-content" Jan 21 12:06:20 crc kubenswrapper[4745]: E0121 12:06:20.749161 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerName="registry-server" Jan 21 12:06:20 crc kubenswrapper[4745]: I0121 12:06:20.749167 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerName="registry-server" Jan 21 12:06:20 crc kubenswrapper[4745]: I0121 12:06:20.749556 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b558c84f-8626-4ceb-9f13-dd2ac1cbf6ad" containerName="registry-server" Jan 21 12:06:20 crc kubenswrapper[4745]: I0121 12:06:20.792674 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:20 crc kubenswrapper[4745]: I0121 12:06:20.806631 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knxxb"] Jan 21 12:06:20 crc kubenswrapper[4745]: I0121 12:06:20.957951 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-utilities\") pod \"certified-operators-knxxb\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:20 crc kubenswrapper[4745]: I0121 12:06:20.958255 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-catalog-content\") pod \"certified-operators-knxxb\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:20 crc kubenswrapper[4745]: I0121 12:06:20.958398 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98tqv\" (UniqueName: \"kubernetes.io/projected/72099449-9921-4861-a2a8-09a49da318ad-kube-api-access-98tqv\") pod \"certified-operators-knxxb\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:21 crc kubenswrapper[4745]: I0121 12:06:21.060416 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98tqv\" (UniqueName: \"kubernetes.io/projected/72099449-9921-4861-a2a8-09a49da318ad-kube-api-access-98tqv\") pod \"certified-operators-knxxb\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:21 crc kubenswrapper[4745]: I0121 12:06:21.060623 4745 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-utilities\") pod \"certified-operators-knxxb\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:21 crc kubenswrapper[4745]: I0121 12:06:21.060664 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-catalog-content\") pod \"certified-operators-knxxb\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:21 crc kubenswrapper[4745]: I0121 12:06:21.061070 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-catalog-content\") pod \"certified-operators-knxxb\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:21 crc kubenswrapper[4745]: I0121 12:06:21.061625 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-utilities\") pod \"certified-operators-knxxb\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:21 crc kubenswrapper[4745]: I0121 12:06:21.101239 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98tqv\" (UniqueName: \"kubernetes.io/projected/72099449-9921-4861-a2a8-09a49da318ad-kube-api-access-98tqv\") pod \"certified-operators-knxxb\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:21 crc kubenswrapper[4745]: I0121 12:06:21.132062 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:21 crc kubenswrapper[4745]: I0121 12:06:21.824439 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-knxxb"] Jan 21 12:06:22 crc kubenswrapper[4745]: I0121 12:06:22.419226 4745 generic.go:334] "Generic (PLEG): container finished" podID="72099449-9921-4861-a2a8-09a49da318ad" containerID="971fe4b7577f81370c392b176e726342f345eacb34fe0a13e5649b9dbabc3ac8" exitCode=0 Jan 21 12:06:22 crc kubenswrapper[4745]: I0121 12:06:22.419379 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knxxb" event={"ID":"72099449-9921-4861-a2a8-09a49da318ad","Type":"ContainerDied","Data":"971fe4b7577f81370c392b176e726342f345eacb34fe0a13e5649b9dbabc3ac8"} Jan 21 12:06:22 crc kubenswrapper[4745]: I0121 12:06:22.419496 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knxxb" event={"ID":"72099449-9921-4861-a2a8-09a49da318ad","Type":"ContainerStarted","Data":"a105e7f393150de59acfb026d7a5f674fe3b16178f884cefb8925976b8887537"} Jan 21 12:06:24 crc kubenswrapper[4745]: I0121 12:06:24.453700 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knxxb" event={"ID":"72099449-9921-4861-a2a8-09a49da318ad","Type":"ContainerStarted","Data":"c80737255adf601ed8b27beafe220d7d5e6a0278a318df24cf5240dde39e61a5"} Jan 21 12:06:25 crc kubenswrapper[4745]: I0121 12:06:25.468486 4745 generic.go:334] "Generic (PLEG): container finished" podID="72099449-9921-4861-a2a8-09a49da318ad" containerID="c80737255adf601ed8b27beafe220d7d5e6a0278a318df24cf5240dde39e61a5" exitCode=0 Jan 21 12:06:25 crc kubenswrapper[4745]: I0121 12:06:25.468603 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knxxb" 
event={"ID":"72099449-9921-4861-a2a8-09a49da318ad","Type":"ContainerDied","Data":"c80737255adf601ed8b27beafe220d7d5e6a0278a318df24cf5240dde39e61a5"} Jan 21 12:06:26 crc kubenswrapper[4745]: I0121 12:06:26.484412 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knxxb" event={"ID":"72099449-9921-4861-a2a8-09a49da318ad","Type":"ContainerStarted","Data":"d8a3006184eaeb34123e2b2a4d43282375c68c57ab39330da6b326c72721ff9e"} Jan 21 12:06:26 crc kubenswrapper[4745]: I0121 12:06:26.516449 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-knxxb" podStartSLOduration=3.027954825 podStartE2EDuration="6.516424343s" podCreationTimestamp="2026-01-21 12:06:20 +0000 UTC" firstStartedPulling="2026-01-21 12:06:22.421113758 +0000 UTC m=+5366.881901356" lastFinishedPulling="2026-01-21 12:06:25.909583276 +0000 UTC m=+5370.370370874" observedRunningTime="2026-01-21 12:06:26.514839489 +0000 UTC m=+5370.975627087" watchObservedRunningTime="2026-01-21 12:06:26.516424343 +0000 UTC m=+5370.977211941" Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.513105 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nx6br"] Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.515440 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.527171 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nx6br"] Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.684557 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-catalog-content\") pod \"redhat-operators-nx6br\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.684996 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-utilities\") pod \"redhat-operators-nx6br\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.685172 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mnrb\" (UniqueName: \"kubernetes.io/projected/e39ac86d-3d61-4ac0-8054-aab2a202f51a-kube-api-access-4mnrb\") pod \"redhat-operators-nx6br\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.786963 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-utilities\") pod \"redhat-operators-nx6br\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.787039 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4mnrb\" (UniqueName: \"kubernetes.io/projected/e39ac86d-3d61-4ac0-8054-aab2a202f51a-kube-api-access-4mnrb\") pod \"redhat-operators-nx6br\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.787156 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-catalog-content\") pod \"redhat-operators-nx6br\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.787482 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-utilities\") pod \"redhat-operators-nx6br\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.787616 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-catalog-content\") pod \"redhat-operators-nx6br\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.820300 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mnrb\" (UniqueName: \"kubernetes.io/projected/e39ac86d-3d61-4ac0-8054-aab2a202f51a-kube-api-access-4mnrb\") pod \"redhat-operators-nx6br\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:27 crc kubenswrapper[4745]: I0121 12:06:27.848211 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:28 crc kubenswrapper[4745]: I0121 12:06:28.471122 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nx6br"] Jan 21 12:06:28 crc kubenswrapper[4745]: I0121 12:06:28.501008 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6br" event={"ID":"e39ac86d-3d61-4ac0-8054-aab2a202f51a","Type":"ContainerStarted","Data":"95dc386886c0922d9b422144bf4de3621236a5fce147db692f4529e903285c86"} Jan 21 12:06:29 crc kubenswrapper[4745]: I0121 12:06:29.001028 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:06:29 crc kubenswrapper[4745]: E0121 12:06:29.001506 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:06:29 crc kubenswrapper[4745]: I0121 12:06:29.511763 4745 generic.go:334] "Generic (PLEG): container finished" podID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerID="f2de09055acea96bad3f930911ff2ddab9f4854b2e5dc00c87bec99857a3c281" exitCode=0 Jan 21 12:06:29 crc kubenswrapper[4745]: I0121 12:06:29.511815 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6br" event={"ID":"e39ac86d-3d61-4ac0-8054-aab2a202f51a","Type":"ContainerDied","Data":"f2de09055acea96bad3f930911ff2ddab9f4854b2e5dc00c87bec99857a3c281"} Jan 21 12:06:31 crc kubenswrapper[4745]: I0121 12:06:31.134219 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-knxxb" 
Jan 21 12:06:31 crc kubenswrapper[4745]: I0121 12:06:31.134641 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:31 crc kubenswrapper[4745]: I0121 12:06:31.200586 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:31 crc kubenswrapper[4745]: I0121 12:06:31.535288 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6br" event={"ID":"e39ac86d-3d61-4ac0-8054-aab2a202f51a","Type":"ContainerStarted","Data":"5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091"} Jan 21 12:06:31 crc kubenswrapper[4745]: I0121 12:06:31.589584 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:33 crc kubenswrapper[4745]: I0121 12:06:33.258785 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knxxb"] Jan 21 12:06:33 crc kubenswrapper[4745]: I0121 12:06:33.550865 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-knxxb" podUID="72099449-9921-4861-a2a8-09a49da318ad" containerName="registry-server" containerID="cri-o://d8a3006184eaeb34123e2b2a4d43282375c68c57ab39330da6b326c72721ff9e" gracePeriod=2 Jan 21 12:06:34 crc kubenswrapper[4745]: I0121 12:06:34.561222 4745 generic.go:334] "Generic (PLEG): container finished" podID="72099449-9921-4861-a2a8-09a49da318ad" containerID="d8a3006184eaeb34123e2b2a4d43282375c68c57ab39330da6b326c72721ff9e" exitCode=0 Jan 21 12:06:34 crc kubenswrapper[4745]: I0121 12:06:34.561294 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knxxb" 
event={"ID":"72099449-9921-4861-a2a8-09a49da318ad","Type":"ContainerDied","Data":"d8a3006184eaeb34123e2b2a4d43282375c68c57ab39330da6b326c72721ff9e"} Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.398162 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.504286 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98tqv\" (UniqueName: \"kubernetes.io/projected/72099449-9921-4861-a2a8-09a49da318ad-kube-api-access-98tqv\") pod \"72099449-9921-4861-a2a8-09a49da318ad\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.504755 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-utilities\") pod \"72099449-9921-4861-a2a8-09a49da318ad\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.504917 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-catalog-content\") pod \"72099449-9921-4861-a2a8-09a49da318ad\" (UID: \"72099449-9921-4861-a2a8-09a49da318ad\") " Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.505363 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-utilities" (OuterVolumeSpecName: "utilities") pod "72099449-9921-4861-a2a8-09a49da318ad" (UID: "72099449-9921-4861-a2a8-09a49da318ad"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.505911 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.511060 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72099449-9921-4861-a2a8-09a49da318ad-kube-api-access-98tqv" (OuterVolumeSpecName: "kube-api-access-98tqv") pod "72099449-9921-4861-a2a8-09a49da318ad" (UID: "72099449-9921-4861-a2a8-09a49da318ad"). InnerVolumeSpecName "kube-api-access-98tqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.542561 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "72099449-9921-4861-a2a8-09a49da318ad" (UID: "72099449-9921-4861-a2a8-09a49da318ad"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.596339 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-knxxb" event={"ID":"72099449-9921-4861-a2a8-09a49da318ad","Type":"ContainerDied","Data":"a105e7f393150de59acfb026d7a5f674fe3b16178f884cefb8925976b8887537"} Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.596401 4745 scope.go:117] "RemoveContainer" containerID="d8a3006184eaeb34123e2b2a4d43282375c68c57ab39330da6b326c72721ff9e" Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.596678 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-knxxb" Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.609080 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/72099449-9921-4861-a2a8-09a49da318ad-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.609110 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98tqv\" (UniqueName: \"kubernetes.io/projected/72099449-9921-4861-a2a8-09a49da318ad-kube-api-access-98tqv\") on node \"crc\" DevicePath \"\"" Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.639709 4745 scope.go:117] "RemoveContainer" containerID="c80737255adf601ed8b27beafe220d7d5e6a0278a318df24cf5240dde39e61a5" Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.644827 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-knxxb"] Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.652586 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-knxxb"] Jan 21 12:06:37 crc kubenswrapper[4745]: I0121 12:06:37.740838 4745 scope.go:117] "RemoveContainer" containerID="971fe4b7577f81370c392b176e726342f345eacb34fe0a13e5649b9dbabc3ac8" Jan 21 12:06:38 crc kubenswrapper[4745]: I0121 12:06:38.012669 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72099449-9921-4861-a2a8-09a49da318ad" path="/var/lib/kubelet/pods/72099449-9921-4861-a2a8-09a49da318ad/volumes" Jan 21 12:06:38 crc kubenswrapper[4745]: I0121 12:06:38.606482 4745 generic.go:334] "Generic (PLEG): container finished" podID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerID="5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091" exitCode=0 Jan 21 12:06:38 crc kubenswrapper[4745]: I0121 12:06:38.606613 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-nx6br" event={"ID":"e39ac86d-3d61-4ac0-8054-aab2a202f51a","Type":"ContainerDied","Data":"5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091"} Jan 21 12:06:39 crc kubenswrapper[4745]: I0121 12:06:39.618435 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6br" event={"ID":"e39ac86d-3d61-4ac0-8054-aab2a202f51a","Type":"ContainerStarted","Data":"5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0"} Jan 21 12:06:40 crc kubenswrapper[4745]: I0121 12:06:40.648909 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nx6br" podStartSLOduration=3.919228864 podStartE2EDuration="13.648885272s" podCreationTimestamp="2026-01-21 12:06:27 +0000 UTC" firstStartedPulling="2026-01-21 12:06:29.513574156 +0000 UTC m=+5373.974361754" lastFinishedPulling="2026-01-21 12:06:39.243230564 +0000 UTC m=+5383.704018162" observedRunningTime="2026-01-21 12:06:40.641783595 +0000 UTC m=+5385.102571193" watchObservedRunningTime="2026-01-21 12:06:40.648885272 +0000 UTC m=+5385.109672870" Jan 21 12:06:44 crc kubenswrapper[4745]: I0121 12:06:44.000258 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:06:44 crc kubenswrapper[4745]: E0121 12:06:44.001068 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:06:47 crc kubenswrapper[4745]: I0121 12:06:47.848765 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:47 crc kubenswrapper[4745]: I0121 12:06:47.849353 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:48 crc kubenswrapper[4745]: I0121 12:06:48.899500 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nx6br" podUID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerName="registry-server" probeResult="failure" output=< Jan 21 12:06:48 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:06:48 crc kubenswrapper[4745]: > Jan 21 12:06:56 crc kubenswrapper[4745]: I0121 12:06:56.044774 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:06:56 crc kubenswrapper[4745]: E0121 12:06:56.045883 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:06:57 crc kubenswrapper[4745]: I0121 12:06:57.913625 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:57 crc kubenswrapper[4745]: I0121 12:06:57.973120 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:06:58 crc kubenswrapper[4745]: I0121 12:06:58.711547 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nx6br"] Jan 21 12:06:59 crc kubenswrapper[4745]: I0121 12:06:59.829363 4745 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-nx6br" podUID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerName="registry-server" containerID="cri-o://5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0" gracePeriod=2 Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.466726 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.567127 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-utilities\") pod \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.567383 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-catalog-content\") pod \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.567474 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mnrb\" (UniqueName: \"kubernetes.io/projected/e39ac86d-3d61-4ac0-8054-aab2a202f51a-kube-api-access-4mnrb\") pod \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\" (UID: \"e39ac86d-3d61-4ac0-8054-aab2a202f51a\") " Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.569182 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-utilities" (OuterVolumeSpecName: "utilities") pod "e39ac86d-3d61-4ac0-8054-aab2a202f51a" (UID: "e39ac86d-3d61-4ac0-8054-aab2a202f51a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.574500 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e39ac86d-3d61-4ac0-8054-aab2a202f51a-kube-api-access-4mnrb" (OuterVolumeSpecName: "kube-api-access-4mnrb") pod "e39ac86d-3d61-4ac0-8054-aab2a202f51a" (UID: "e39ac86d-3d61-4ac0-8054-aab2a202f51a"). InnerVolumeSpecName "kube-api-access-4mnrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.669976 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mnrb\" (UniqueName: \"kubernetes.io/projected/e39ac86d-3d61-4ac0-8054-aab2a202f51a-kube-api-access-4mnrb\") on node \"crc\" DevicePath \"\"" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.670441 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.707759 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e39ac86d-3d61-4ac0-8054-aab2a202f51a" (UID: "e39ac86d-3d61-4ac0-8054-aab2a202f51a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.771990 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e39ac86d-3d61-4ac0-8054-aab2a202f51a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.860786 4745 generic.go:334] "Generic (PLEG): container finished" podID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerID="5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0" exitCode=0 Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.860852 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6br" event={"ID":"e39ac86d-3d61-4ac0-8054-aab2a202f51a","Type":"ContainerDied","Data":"5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0"} Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.860878 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nx6br" event={"ID":"e39ac86d-3d61-4ac0-8054-aab2a202f51a","Type":"ContainerDied","Data":"95dc386886c0922d9b422144bf4de3621236a5fce147db692f4529e903285c86"} Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.860895 4745 scope.go:117] "RemoveContainer" containerID="5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.861014 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nx6br" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.895970 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nx6br"] Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.899616 4745 scope.go:117] "RemoveContainer" containerID="5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.903752 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nx6br"] Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.927336 4745 scope.go:117] "RemoveContainer" containerID="f2de09055acea96bad3f930911ff2ddab9f4854b2e5dc00c87bec99857a3c281" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.962404 4745 scope.go:117] "RemoveContainer" containerID="5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0" Jan 21 12:07:00 crc kubenswrapper[4745]: E0121 12:07:00.963622 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0\": container with ID starting with 5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0 not found: ID does not exist" containerID="5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.963680 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0"} err="failed to get container status \"5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0\": rpc error: code = NotFound desc = could not find container \"5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0\": container with ID starting with 5b75344e6431423b2e70a9a360d5d3b1c029ff4aa556dbb177d4fecfdd27d5e0 not found: ID does 
not exist" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.963710 4745 scope.go:117] "RemoveContainer" containerID="5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091" Jan 21 12:07:00 crc kubenswrapper[4745]: E0121 12:07:00.964354 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091\": container with ID starting with 5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091 not found: ID does not exist" containerID="5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.964415 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091"} err="failed to get container status \"5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091\": rpc error: code = NotFound desc = could not find container \"5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091\": container with ID starting with 5f2f1095b2e85eb941a4ebc414f957a62b7de6d096744d672bb32428ec7dc091 not found: ID does not exist" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.964447 4745 scope.go:117] "RemoveContainer" containerID="f2de09055acea96bad3f930911ff2ddab9f4854b2e5dc00c87bec99857a3c281" Jan 21 12:07:00 crc kubenswrapper[4745]: E0121 12:07:00.964907 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2de09055acea96bad3f930911ff2ddab9f4854b2e5dc00c87bec99857a3c281\": container with ID starting with f2de09055acea96bad3f930911ff2ddab9f4854b2e5dc00c87bec99857a3c281 not found: ID does not exist" containerID="f2de09055acea96bad3f930911ff2ddab9f4854b2e5dc00c87bec99857a3c281" Jan 21 12:07:00 crc kubenswrapper[4745]: I0121 12:07:00.964949 4745 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2de09055acea96bad3f930911ff2ddab9f4854b2e5dc00c87bec99857a3c281"} err="failed to get container status \"f2de09055acea96bad3f930911ff2ddab9f4854b2e5dc00c87bec99857a3c281\": rpc error: code = NotFound desc = could not find container \"f2de09055acea96bad3f930911ff2ddab9f4854b2e5dc00c87bec99857a3c281\": container with ID starting with f2de09055acea96bad3f930911ff2ddab9f4854b2e5dc00c87bec99857a3c281 not found: ID does not exist" Jan 21 12:07:02 crc kubenswrapper[4745]: I0121 12:07:02.016785 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" path="/var/lib/kubelet/pods/e39ac86d-3d61-4ac0-8054-aab2a202f51a/volumes" Jan 21 12:07:08 crc kubenswrapper[4745]: I0121 12:07:08.000042 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:07:08 crc kubenswrapper[4745]: E0121 12:07:08.001029 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:07:20 crc kubenswrapper[4745]: I0121 12:07:20.000817 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:07:20 crc kubenswrapper[4745]: E0121 12:07:20.001559 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:07:33 crc kubenswrapper[4745]: I0121 12:07:33.002049 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:07:33 crc kubenswrapper[4745]: E0121 12:07:33.002794 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:07:46 crc kubenswrapper[4745]: I0121 12:07:46.010475 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:07:46 crc kubenswrapper[4745]: E0121 12:07:46.011259 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:07:58 crc kubenswrapper[4745]: I0121 12:07:58.000398 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:07:58 crc kubenswrapper[4745]: E0121 12:07:58.001207 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:08:12 crc kubenswrapper[4745]: I0121 12:08:12.000198 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:08:12 crc kubenswrapper[4745]: E0121 12:08:12.001684 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.363605 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ppf2p"] Jan 21 12:08:15 crc kubenswrapper[4745]: E0121 12:08:15.364389 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerName="extract-utilities" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.364425 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerName="extract-utilities" Jan 21 12:08:15 crc kubenswrapper[4745]: E0121 12:08:15.364448 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerName="registry-server" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.364459 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerName="registry-server" Jan 21 12:08:15 crc kubenswrapper[4745]: E0121 12:08:15.364470 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="72099449-9921-4861-a2a8-09a49da318ad" containerName="registry-server" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.364478 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="72099449-9921-4861-a2a8-09a49da318ad" containerName="registry-server" Jan 21 12:08:15 crc kubenswrapper[4745]: E0121 12:08:15.364490 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72099449-9921-4861-a2a8-09a49da318ad" containerName="extract-utilities" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.364498 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="72099449-9921-4861-a2a8-09a49da318ad" containerName="extract-utilities" Jan 21 12:08:15 crc kubenswrapper[4745]: E0121 12:08:15.364512 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerName="extract-content" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.364519 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerName="extract-content" Jan 21 12:08:15 crc kubenswrapper[4745]: E0121 12:08:15.364550 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72099449-9921-4861-a2a8-09a49da318ad" containerName="extract-content" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.364558 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="72099449-9921-4861-a2a8-09a49da318ad" containerName="extract-content" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.364791 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="72099449-9921-4861-a2a8-09a49da318ad" containerName="registry-server" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.364824 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e39ac86d-3d61-4ac0-8054-aab2a202f51a" containerName="registry-server" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.367934 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.390916 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ppf2p"] Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.478590 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-catalog-content\") pod \"community-operators-ppf2p\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.478711 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqcsj\" (UniqueName: \"kubernetes.io/projected/413d9c76-cc03-4046-b8df-bd9c52633539-kube-api-access-tqcsj\") pod \"community-operators-ppf2p\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.478991 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-utilities\") pod \"community-operators-ppf2p\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.580625 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-utilities\") pod \"community-operators-ppf2p\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.580745 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-catalog-content\") pod \"community-operators-ppf2p\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.580763 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqcsj\" (UniqueName: \"kubernetes.io/projected/413d9c76-cc03-4046-b8df-bd9c52633539-kube-api-access-tqcsj\") pod \"community-operators-ppf2p\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.581167 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-utilities\") pod \"community-operators-ppf2p\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.581194 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-catalog-content\") pod \"community-operators-ppf2p\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.601541 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqcsj\" (UniqueName: \"kubernetes.io/projected/413d9c76-cc03-4046-b8df-bd9c52633539-kube-api-access-tqcsj\") pod \"community-operators-ppf2p\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:15 crc kubenswrapper[4745]: I0121 12:08:15.690427 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:16 crc kubenswrapper[4745]: I0121 12:08:16.393586 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ppf2p"] Jan 21 12:08:16 crc kubenswrapper[4745]: I0121 12:08:16.533793 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppf2p" event={"ID":"413d9c76-cc03-4046-b8df-bd9c52633539","Type":"ContainerStarted","Data":"a32cc9175e113ec141039dfc9da78e353fa4aa180e45397283a128faaceca896"} Jan 21 12:08:17 crc kubenswrapper[4745]: I0121 12:08:17.544068 4745 generic.go:334] "Generic (PLEG): container finished" podID="413d9c76-cc03-4046-b8df-bd9c52633539" containerID="c950958c352d8a9c2abd9e0b7255cb6c127d36e2d217ad42db766c20f49b0691" exitCode=0 Jan 21 12:08:17 crc kubenswrapper[4745]: I0121 12:08:17.544139 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppf2p" event={"ID":"413d9c76-cc03-4046-b8df-bd9c52633539","Type":"ContainerDied","Data":"c950958c352d8a9c2abd9e0b7255cb6c127d36e2d217ad42db766c20f49b0691"} Jan 21 12:08:19 crc kubenswrapper[4745]: I0121 12:08:19.571870 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppf2p" event={"ID":"413d9c76-cc03-4046-b8df-bd9c52633539","Type":"ContainerStarted","Data":"b3320d243866d8b6b64c24e57cd3f4e8099cf714b8832c3d572345193fadba4d"} Jan 21 12:08:20 crc kubenswrapper[4745]: I0121 12:08:20.584037 4745 generic.go:334] "Generic (PLEG): container finished" podID="413d9c76-cc03-4046-b8df-bd9c52633539" containerID="b3320d243866d8b6b64c24e57cd3f4e8099cf714b8832c3d572345193fadba4d" exitCode=0 Jan 21 12:08:20 crc kubenswrapper[4745]: I0121 12:08:20.584155 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppf2p" 
event={"ID":"413d9c76-cc03-4046-b8df-bd9c52633539","Type":"ContainerDied","Data":"b3320d243866d8b6b64c24e57cd3f4e8099cf714b8832c3d572345193fadba4d"} Jan 21 12:08:21 crc kubenswrapper[4745]: I0121 12:08:21.598163 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppf2p" event={"ID":"413d9c76-cc03-4046-b8df-bd9c52633539","Type":"ContainerStarted","Data":"9292b973879856dd7b8491ee5745969b46b9b1f7d16af5991bf3aab8c164df8b"} Jan 21 12:08:21 crc kubenswrapper[4745]: I0121 12:08:21.624360 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ppf2p" podStartSLOduration=3.164049152 podStartE2EDuration="6.624338388s" podCreationTimestamp="2026-01-21 12:08:15 +0000 UTC" firstStartedPulling="2026-01-21 12:08:17.546345934 +0000 UTC m=+5482.007133532" lastFinishedPulling="2026-01-21 12:08:21.00663517 +0000 UTC m=+5485.467422768" observedRunningTime="2026-01-21 12:08:21.617323884 +0000 UTC m=+5486.078111492" watchObservedRunningTime="2026-01-21 12:08:21.624338388 +0000 UTC m=+5486.085125986" Jan 21 12:08:25 crc kubenswrapper[4745]: I0121 12:08:25.691316 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:25 crc kubenswrapper[4745]: I0121 12:08:25.691913 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:25 crc kubenswrapper[4745]: I0121 12:08:25.746700 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:26 crc kubenswrapper[4745]: I0121 12:08:26.007505 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:08:26 crc kubenswrapper[4745]: E0121 12:08:26.007782 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:08:26 crc kubenswrapper[4745]: I0121 12:08:26.706080 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:26 crc kubenswrapper[4745]: I0121 12:08:26.759635 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ppf2p"] Jan 21 12:08:28 crc kubenswrapper[4745]: I0121 12:08:28.663977 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ppf2p" podUID="413d9c76-cc03-4046-b8df-bd9c52633539" containerName="registry-server" containerID="cri-o://9292b973879856dd7b8491ee5745969b46b9b1f7d16af5991bf3aab8c164df8b" gracePeriod=2 Jan 21 12:08:29 crc kubenswrapper[4745]: I0121 12:08:29.677872 4745 generic.go:334] "Generic (PLEG): container finished" podID="413d9c76-cc03-4046-b8df-bd9c52633539" containerID="9292b973879856dd7b8491ee5745969b46b9b1f7d16af5991bf3aab8c164df8b" exitCode=0 Jan 21 12:08:29 crc kubenswrapper[4745]: I0121 12:08:29.677916 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppf2p" event={"ID":"413d9c76-cc03-4046-b8df-bd9c52633539","Type":"ContainerDied","Data":"9292b973879856dd7b8491ee5745969b46b9b1f7d16af5991bf3aab8c164df8b"} Jan 21 12:08:29 crc kubenswrapper[4745]: I0121 12:08:29.816095 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:29 crc kubenswrapper[4745]: I0121 12:08:29.918097 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqcsj\" (UniqueName: \"kubernetes.io/projected/413d9c76-cc03-4046-b8df-bd9c52633539-kube-api-access-tqcsj\") pod \"413d9c76-cc03-4046-b8df-bd9c52633539\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " Jan 21 12:08:29 crc kubenswrapper[4745]: I0121 12:08:29.918512 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-catalog-content\") pod \"413d9c76-cc03-4046-b8df-bd9c52633539\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " Jan 21 12:08:29 crc kubenswrapper[4745]: I0121 12:08:29.918676 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-utilities\") pod \"413d9c76-cc03-4046-b8df-bd9c52633539\" (UID: \"413d9c76-cc03-4046-b8df-bd9c52633539\") " Jan 21 12:08:29 crc kubenswrapper[4745]: I0121 12:08:29.920260 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-utilities" (OuterVolumeSpecName: "utilities") pod "413d9c76-cc03-4046-b8df-bd9c52633539" (UID: "413d9c76-cc03-4046-b8df-bd9c52633539"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:08:29 crc kubenswrapper[4745]: I0121 12:08:29.923921 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/413d9c76-cc03-4046-b8df-bd9c52633539-kube-api-access-tqcsj" (OuterVolumeSpecName: "kube-api-access-tqcsj") pod "413d9c76-cc03-4046-b8df-bd9c52633539" (UID: "413d9c76-cc03-4046-b8df-bd9c52633539"). InnerVolumeSpecName "kube-api-access-tqcsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:08:29 crc kubenswrapper[4745]: I0121 12:08:29.973084 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "413d9c76-cc03-4046-b8df-bd9c52633539" (UID: "413d9c76-cc03-4046-b8df-bd9c52633539"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:08:30 crc kubenswrapper[4745]: I0121 12:08:30.025342 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqcsj\" (UniqueName: \"kubernetes.io/projected/413d9c76-cc03-4046-b8df-bd9c52633539-kube-api-access-tqcsj\") on node \"crc\" DevicePath \"\"" Jan 21 12:08:30 crc kubenswrapper[4745]: I0121 12:08:30.025372 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:08:30 crc kubenswrapper[4745]: I0121 12:08:30.025383 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/413d9c76-cc03-4046-b8df-bd9c52633539-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:08:30 crc kubenswrapper[4745]: I0121 12:08:30.692491 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ppf2p" event={"ID":"413d9c76-cc03-4046-b8df-bd9c52633539","Type":"ContainerDied","Data":"a32cc9175e113ec141039dfc9da78e353fa4aa180e45397283a128faaceca896"} Jan 21 12:08:30 crc kubenswrapper[4745]: I0121 12:08:30.692593 4745 scope.go:117] "RemoveContainer" containerID="9292b973879856dd7b8491ee5745969b46b9b1f7d16af5991bf3aab8c164df8b" Jan 21 12:08:30 crc kubenswrapper[4745]: I0121 12:08:30.694290 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ppf2p" Jan 21 12:08:30 crc kubenswrapper[4745]: I0121 12:08:30.740169 4745 scope.go:117] "RemoveContainer" containerID="b3320d243866d8b6b64c24e57cd3f4e8099cf714b8832c3d572345193fadba4d" Jan 21 12:08:30 crc kubenswrapper[4745]: I0121 12:08:30.741598 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ppf2p"] Jan 21 12:08:30 crc kubenswrapper[4745]: I0121 12:08:30.756044 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ppf2p"] Jan 21 12:08:30 crc kubenswrapper[4745]: I0121 12:08:30.772382 4745 scope.go:117] "RemoveContainer" containerID="c950958c352d8a9c2abd9e0b7255cb6c127d36e2d217ad42db766c20f49b0691" Jan 21 12:08:32 crc kubenswrapper[4745]: I0121 12:08:32.011650 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="413d9c76-cc03-4046-b8df-bd9c52633539" path="/var/lib/kubelet/pods/413d9c76-cc03-4046-b8df-bd9c52633539/volumes" Jan 21 12:08:41 crc kubenswrapper[4745]: I0121 12:08:41.000354 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:08:41 crc kubenswrapper[4745]: E0121 12:08:41.001099 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:08:52 crc kubenswrapper[4745]: I0121 12:08:52.000607 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:08:52 crc kubenswrapper[4745]: E0121 12:08:52.001405 4745 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:09:07 crc kubenswrapper[4745]: I0121 12:09:07.000960 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:09:07 crc kubenswrapper[4745]: E0121 12:09:07.001734 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:09:19 crc kubenswrapper[4745]: I0121 12:09:19.000874 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:09:19 crc kubenswrapper[4745]: E0121 12:09:19.003024 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:09:33 crc kubenswrapper[4745]: I0121 12:09:33.000587 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:09:33 crc kubenswrapper[4745]: E0121 12:09:33.001426 4745 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:09:46 crc kubenswrapper[4745]: I0121 12:09:46.020239 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:09:46 crc kubenswrapper[4745]: E0121 12:09:46.033119 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:09:58 crc kubenswrapper[4745]: I0121 12:09:58.000664 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:09:58 crc kubenswrapper[4745]: E0121 12:09:58.001434 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:10:11 crc kubenswrapper[4745]: I0121 12:10:11.001112 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:10:11 crc kubenswrapper[4745]: E0121 12:10:11.002010 4745 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:10:23 crc kubenswrapper[4745]: I0121 12:10:23.000567 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:10:23 crc kubenswrapper[4745]: I0121 12:10:23.763352 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"68966aad646b72551463f9b571435eed240b8441ee88cf209d34ebaf51aaf3f9"} Jan 21 12:11:47 crc kubenswrapper[4745]: I0121 12:11:47.671385 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-5f8f495fcf-dh2t4" podUID="dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 12:12:45 crc kubenswrapper[4745]: I0121 12:12:45.866728 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:12:45 crc kubenswrapper[4745]: I0121 12:12:45.867351 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:13:15 crc kubenswrapper[4745]: I0121 12:13:15.867031 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:13:15 crc kubenswrapper[4745]: I0121 12:13:15.867484 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:13:45 crc kubenswrapper[4745]: I0121 12:13:45.866996 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:13:45 crc kubenswrapper[4745]: I0121 12:13:45.868574 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:13:45 crc kubenswrapper[4745]: I0121 12:13:45.868709 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 12:13:45 crc kubenswrapper[4745]: I0121 12:13:45.869997 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"68966aad646b72551463f9b571435eed240b8441ee88cf209d34ebaf51aaf3f9"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:13:45 crc kubenswrapper[4745]: I0121 12:13:45.870146 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://68966aad646b72551463f9b571435eed240b8441ee88cf209d34ebaf51aaf3f9" gracePeriod=600 Jan 21 12:13:46 crc kubenswrapper[4745]: I0121 12:13:46.761826 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="68966aad646b72551463f9b571435eed240b8441ee88cf209d34ebaf51aaf3f9" exitCode=0 Jan 21 12:13:46 crc kubenswrapper[4745]: I0121 12:13:46.761895 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"68966aad646b72551463f9b571435eed240b8441ee88cf209d34ebaf51aaf3f9"} Jan 21 12:13:46 crc kubenswrapper[4745]: I0121 12:13:46.762444 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92"} Jan 21 12:13:46 crc kubenswrapper[4745]: I0121 12:13:46.762473 4745 scope.go:117] "RemoveContainer" containerID="0c0919ecfad338a74cf115eb8d0eb2119729245e57ddcc4630eda0022cc170b1" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.158185 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp"] Jan 21 12:15:00 crc kubenswrapper[4745]: 
E0121 12:15:00.159163 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413d9c76-cc03-4046-b8df-bd9c52633539" containerName="extract-content" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.159179 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="413d9c76-cc03-4046-b8df-bd9c52633539" containerName="extract-content" Jan 21 12:15:00 crc kubenswrapper[4745]: E0121 12:15:00.159202 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413d9c76-cc03-4046-b8df-bd9c52633539" containerName="extract-utilities" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.159208 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="413d9c76-cc03-4046-b8df-bd9c52633539" containerName="extract-utilities" Jan 21 12:15:00 crc kubenswrapper[4745]: E0121 12:15:00.159222 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="413d9c76-cc03-4046-b8df-bd9c52633539" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.159228 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="413d9c76-cc03-4046-b8df-bd9c52633539" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.159450 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="413d9c76-cc03-4046-b8df-bd9c52633539" containerName="registry-server" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.160092 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.166961 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.166972 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.189651 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp"] Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.274185 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4rf2\" (UniqueName: \"kubernetes.io/projected/a8df3d29-5122-426f-9346-1b5f6b048bb0-kube-api-access-g4rf2\") pod \"collect-profiles-29483295-h9crp\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.274243 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8df3d29-5122-426f-9346-1b5f6b048bb0-config-volume\") pod \"collect-profiles-29483295-h9crp\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.274310 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8df3d29-5122-426f-9346-1b5f6b048bb0-secret-volume\") pod \"collect-profiles-29483295-h9crp\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.375831 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4rf2\" (UniqueName: \"kubernetes.io/projected/a8df3d29-5122-426f-9346-1b5f6b048bb0-kube-api-access-g4rf2\") pod \"collect-profiles-29483295-h9crp\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.375932 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8df3d29-5122-426f-9346-1b5f6b048bb0-config-volume\") pod \"collect-profiles-29483295-h9crp\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.376037 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8df3d29-5122-426f-9346-1b5f6b048bb0-secret-volume\") pod \"collect-profiles-29483295-h9crp\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.377742 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8df3d29-5122-426f-9346-1b5f6b048bb0-config-volume\") pod \"collect-profiles-29483295-h9crp\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.388482 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a8df3d29-5122-426f-9346-1b5f6b048bb0-secret-volume\") pod \"collect-profiles-29483295-h9crp\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.395378 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4rf2\" (UniqueName: \"kubernetes.io/projected/a8df3d29-5122-426f-9346-1b5f6b048bb0-kube-api-access-g4rf2\") pod \"collect-profiles-29483295-h9crp\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.488431 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:00 crc kubenswrapper[4745]: I0121 12:15:00.983975 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp"] Jan 21 12:15:01 crc kubenswrapper[4745]: I0121 12:15:01.427429 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8df3d29-5122-426f-9346-1b5f6b048bb0" containerID="6ea6b2b969257ca443bb37a1ad73500b3bcf40a01e6782c522a437214c614477" exitCode=0 Jan 21 12:15:01 crc kubenswrapper[4745]: I0121 12:15:01.427563 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" event={"ID":"a8df3d29-5122-426f-9346-1b5f6b048bb0","Type":"ContainerDied","Data":"6ea6b2b969257ca443bb37a1ad73500b3bcf40a01e6782c522a437214c614477"} Jan 21 12:15:01 crc kubenswrapper[4745]: I0121 12:15:01.427847 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" 
event={"ID":"a8df3d29-5122-426f-9346-1b5f6b048bb0","Type":"ContainerStarted","Data":"09893b6cf6a7c8b8f1591b90514235bd9e177bb9e4c9a02d725e97c9410ba520"} Jan 21 12:15:02 crc kubenswrapper[4745]: I0121 12:15:02.745423 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:02 crc kubenswrapper[4745]: I0121 12:15:02.822470 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4rf2\" (UniqueName: \"kubernetes.io/projected/a8df3d29-5122-426f-9346-1b5f6b048bb0-kube-api-access-g4rf2\") pod \"a8df3d29-5122-426f-9346-1b5f6b048bb0\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " Jan 21 12:15:02 crc kubenswrapper[4745]: I0121 12:15:02.822614 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8df3d29-5122-426f-9346-1b5f6b048bb0-config-volume\") pod \"a8df3d29-5122-426f-9346-1b5f6b048bb0\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " Jan 21 12:15:02 crc kubenswrapper[4745]: I0121 12:15:02.822847 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8df3d29-5122-426f-9346-1b5f6b048bb0-secret-volume\") pod \"a8df3d29-5122-426f-9346-1b5f6b048bb0\" (UID: \"a8df3d29-5122-426f-9346-1b5f6b048bb0\") " Jan 21 12:15:02 crc kubenswrapper[4745]: I0121 12:15:02.823597 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8df3d29-5122-426f-9346-1b5f6b048bb0-config-volume" (OuterVolumeSpecName: "config-volume") pod "a8df3d29-5122-426f-9346-1b5f6b048bb0" (UID: "a8df3d29-5122-426f-9346-1b5f6b048bb0"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:15:02 crc kubenswrapper[4745]: I0121 12:15:02.834597 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8df3d29-5122-426f-9346-1b5f6b048bb0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a8df3d29-5122-426f-9346-1b5f6b048bb0" (UID: "a8df3d29-5122-426f-9346-1b5f6b048bb0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:15:02 crc kubenswrapper[4745]: I0121 12:15:02.834775 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8df3d29-5122-426f-9346-1b5f6b048bb0-kube-api-access-g4rf2" (OuterVolumeSpecName: "kube-api-access-g4rf2") pod "a8df3d29-5122-426f-9346-1b5f6b048bb0" (UID: "a8df3d29-5122-426f-9346-1b5f6b048bb0"). InnerVolumeSpecName "kube-api-access-g4rf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:15:02 crc kubenswrapper[4745]: I0121 12:15:02.925119 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4rf2\" (UniqueName: \"kubernetes.io/projected/a8df3d29-5122-426f-9346-1b5f6b048bb0-kube-api-access-g4rf2\") on node \"crc\" DevicePath \"\"" Jan 21 12:15:02 crc kubenswrapper[4745]: I0121 12:15:02.925198 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8df3d29-5122-426f-9346-1b5f6b048bb0-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:15:02 crc kubenswrapper[4745]: I0121 12:15:02.925211 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a8df3d29-5122-426f-9346-1b5f6b048bb0-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:15:03 crc kubenswrapper[4745]: I0121 12:15:03.443842 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" 
event={"ID":"a8df3d29-5122-426f-9346-1b5f6b048bb0","Type":"ContainerDied","Data":"09893b6cf6a7c8b8f1591b90514235bd9e177bb9e4c9a02d725e97c9410ba520"} Jan 21 12:15:03 crc kubenswrapper[4745]: I0121 12:15:03.444150 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09893b6cf6a7c8b8f1591b90514235bd9e177bb9e4c9a02d725e97c9410ba520" Jan 21 12:15:03 crc kubenswrapper[4745]: I0121 12:15:03.444215 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483295-h9crp" Jan 21 12:15:03 crc kubenswrapper[4745]: I0121 12:15:03.832634 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts"] Jan 21 12:15:03 crc kubenswrapper[4745]: I0121 12:15:03.842477 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483250-596ts"] Jan 21 12:15:04 crc kubenswrapper[4745]: I0121 12:15:04.012927 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67cd1a0a-60ad-4d9d-a498-b13cd535b86d" path="/var/lib/kubelet/pods/67cd1a0a-60ad-4d9d-a498-b13cd535b86d/volumes" Jan 21 12:15:58 crc kubenswrapper[4745]: I0121 12:15:58.505402 4745 scope.go:117] "RemoveContainer" containerID="49303b0c9710aca04e6c06e2a805f0423dd073ac3acb1407ede5cab0a5795e2d" Jan 21 12:16:15 crc kubenswrapper[4745]: I0121 12:16:15.866703 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:16:15 crc kubenswrapper[4745]: I0121 12:16:15.867908 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.127571 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t2qm5"] Jan 21 12:16:29 crc kubenswrapper[4745]: E0121 12:16:29.128679 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8df3d29-5122-426f-9346-1b5f6b048bb0" containerName="collect-profiles" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.128705 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8df3d29-5122-426f-9346-1b5f6b048bb0" containerName="collect-profiles" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.138906 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8df3d29-5122-426f-9346-1b5f6b048bb0" containerName="collect-profiles" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.140370 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2qm5"] Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.140459 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.314569 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb5js\" (UniqueName: \"kubernetes.io/projected/da385f71-d737-46af-a752-0a68a14472a3-kube-api-access-qb5js\") pod \"redhat-marketplace-t2qm5\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.315130 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-utilities\") pod \"redhat-marketplace-t2qm5\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.315255 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-catalog-content\") pod \"redhat-marketplace-t2qm5\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.419147 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-catalog-content\") pod \"redhat-marketplace-t2qm5\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.419223 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qb5js\" (UniqueName: \"kubernetes.io/projected/da385f71-d737-46af-a752-0a68a14472a3-kube-api-access-qb5js\") pod 
\"redhat-marketplace-t2qm5\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.419354 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-utilities\") pod \"redhat-marketplace-t2qm5\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.419822 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-utilities\") pod \"redhat-marketplace-t2qm5\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.420406 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-catalog-content\") pod \"redhat-marketplace-t2qm5\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.450998 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qb5js\" (UniqueName: \"kubernetes.io/projected/da385f71-d737-46af-a752-0a68a14472a3-kube-api-access-qb5js\") pod \"redhat-marketplace-t2qm5\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:29 crc kubenswrapper[4745]: I0121 12:16:29.469878 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:30 crc kubenswrapper[4745]: I0121 12:16:30.255221 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2qm5"] Jan 21 12:16:31 crc kubenswrapper[4745]: I0121 12:16:31.245909 4745 generic.go:334] "Generic (PLEG): container finished" podID="da385f71-d737-46af-a752-0a68a14472a3" containerID="0d4e40d58aa649dd25e6a8f1c038b60f435f358f3855b4de2ddc6876744910aa" exitCode=0 Jan 21 12:16:31 crc kubenswrapper[4745]: I0121 12:16:31.246383 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2qm5" event={"ID":"da385f71-d737-46af-a752-0a68a14472a3","Type":"ContainerDied","Data":"0d4e40d58aa649dd25e6a8f1c038b60f435f358f3855b4de2ddc6876744910aa"} Jan 21 12:16:31 crc kubenswrapper[4745]: I0121 12:16:31.246417 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2qm5" event={"ID":"da385f71-d737-46af-a752-0a68a14472a3","Type":"ContainerStarted","Data":"9b243312ca02bf8f1fd55857c72c0cea191b5cc771d54f11c393e11be6014be2"} Jan 21 12:16:31 crc kubenswrapper[4745]: I0121 12:16:31.249238 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:16:32 crc kubenswrapper[4745]: I0121 12:16:32.257149 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2qm5" event={"ID":"da385f71-d737-46af-a752-0a68a14472a3","Type":"ContainerStarted","Data":"21f21417d6c7c06eb5f40c3427aec3d91df653e4a48d573d4660fb3644a46bcc"} Jan 21 12:16:33 crc kubenswrapper[4745]: I0121 12:16:33.272815 4745 generic.go:334] "Generic (PLEG): container finished" podID="da385f71-d737-46af-a752-0a68a14472a3" containerID="21f21417d6c7c06eb5f40c3427aec3d91df653e4a48d573d4660fb3644a46bcc" exitCode=0 Jan 21 12:16:33 crc kubenswrapper[4745]: I0121 12:16:33.272879 4745 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-t2qm5" event={"ID":"da385f71-d737-46af-a752-0a68a14472a3","Type":"ContainerDied","Data":"21f21417d6c7c06eb5f40c3427aec3d91df653e4a48d573d4660fb3644a46bcc"} Jan 21 12:16:34 crc kubenswrapper[4745]: I0121 12:16:34.285207 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2qm5" event={"ID":"da385f71-d737-46af-a752-0a68a14472a3","Type":"ContainerStarted","Data":"52deb8fee833e0dd1c0fe378504be9db9b898982ceab471a759dfe630ce6c066"} Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.481162 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t2qm5" podStartSLOduration=5.042804608 podStartE2EDuration="7.481141829s" podCreationTimestamp="2026-01-21 12:16:29 +0000 UTC" firstStartedPulling="2026-01-21 12:16:31.248819092 +0000 UTC m=+5975.709606700" lastFinishedPulling="2026-01-21 12:16:33.687156323 +0000 UTC m=+5978.147943921" observedRunningTime="2026-01-21 12:16:34.311033293 +0000 UTC m=+5978.771820901" watchObservedRunningTime="2026-01-21 12:16:36.481141829 +0000 UTC m=+5980.941929427" Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.490961 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-skfqx"] Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.493646 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.509870 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skfqx"] Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.584054 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-catalog-content\") pod \"redhat-operators-skfqx\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.584113 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfttf\" (UniqueName: \"kubernetes.io/projected/e1e3fb65-d111-4537-b4ad-637387608095-kube-api-access-vfttf\") pod \"redhat-operators-skfqx\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.584378 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-utilities\") pod \"redhat-operators-skfqx\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.686434 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-utilities\") pod \"redhat-operators-skfqx\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.686733 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-catalog-content\") pod \"redhat-operators-skfqx\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.686756 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfttf\" (UniqueName: \"kubernetes.io/projected/e1e3fb65-d111-4537-b4ad-637387608095-kube-api-access-vfttf\") pod \"redhat-operators-skfqx\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.687133 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-utilities\") pod \"redhat-operators-skfqx\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.687163 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-catalog-content\") pod \"redhat-operators-skfqx\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.713689 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfttf\" (UniqueName: \"kubernetes.io/projected/e1e3fb65-d111-4537-b4ad-637387608095-kube-api-access-vfttf\") pod \"redhat-operators-skfqx\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:36 crc kubenswrapper[4745]: I0121 12:16:36.824789 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:37 crc kubenswrapper[4745]: I0121 12:16:37.348291 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-skfqx"] Jan 21 12:16:38 crc kubenswrapper[4745]: I0121 12:16:38.319285 4745 generic.go:334] "Generic (PLEG): container finished" podID="e1e3fb65-d111-4537-b4ad-637387608095" containerID="52e761ea1262932fcf676a79db50b9d0fb633cde20a0269242dab6e388824d6f" exitCode=0 Jan 21 12:16:38 crc kubenswrapper[4745]: I0121 12:16:38.320984 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skfqx" event={"ID":"e1e3fb65-d111-4537-b4ad-637387608095","Type":"ContainerDied","Data":"52e761ea1262932fcf676a79db50b9d0fb633cde20a0269242dab6e388824d6f"} Jan 21 12:16:38 crc kubenswrapper[4745]: I0121 12:16:38.321469 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skfqx" event={"ID":"e1e3fb65-d111-4537-b4ad-637387608095","Type":"ContainerStarted","Data":"3f676d86c6f46d9cf257d4986afb60f2d48a245e6fce194a6d2e9dcc5d32b51a"} Jan 21 12:16:39 crc kubenswrapper[4745]: I0121 12:16:39.470132 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:39 crc kubenswrapper[4745]: I0121 12:16:39.470182 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:39 crc kubenswrapper[4745]: I0121 12:16:39.533756 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:40 crc kubenswrapper[4745]: I0121 12:16:40.396655 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.354464 4745 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skfqx" event={"ID":"e1e3fb65-d111-4537-b4ad-637387608095","Type":"ContainerStarted","Data":"b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8"} Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.494479 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r2f46"] Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.498428 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.511476 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2f46"] Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.587702 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-catalog-content\") pod \"certified-operators-r2f46\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.587751 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgdb6\" (UniqueName: \"kubernetes.io/projected/c655fdcc-4a54-4269-b5c1-f2e7c5159408-kube-api-access-pgdb6\") pod \"certified-operators-r2f46\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.587941 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-utilities\") pod \"certified-operators-r2f46\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " 
pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.689956 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-catalog-content\") pod \"certified-operators-r2f46\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.690008 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgdb6\" (UniqueName: \"kubernetes.io/projected/c655fdcc-4a54-4269-b5c1-f2e7c5159408-kube-api-access-pgdb6\") pod \"certified-operators-r2f46\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.690137 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-utilities\") pod \"certified-operators-r2f46\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.690789 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-utilities\") pod \"certified-operators-r2f46\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.691082 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-catalog-content\") pod \"certified-operators-r2f46\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " 
pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.714971 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgdb6\" (UniqueName: \"kubernetes.io/projected/c655fdcc-4a54-4269-b5c1-f2e7c5159408-kube-api-access-pgdb6\") pod \"certified-operators-r2f46\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:41 crc kubenswrapper[4745]: I0121 12:16:41.817656 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:42 crc kubenswrapper[4745]: I0121 12:16:42.099044 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2qm5"] Jan 21 12:16:42 crc kubenswrapper[4745]: I0121 12:16:42.369576 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t2qm5" podUID="da385f71-d737-46af-a752-0a68a14472a3" containerName="registry-server" containerID="cri-o://52deb8fee833e0dd1c0fe378504be9db9b898982ceab471a759dfe630ce6c066" gracePeriod=2 Jan 21 12:16:42 crc kubenswrapper[4745]: I0121 12:16:42.849798 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r2f46"] Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.379934 4745 generic.go:334] "Generic (PLEG): container finished" podID="da385f71-d737-46af-a752-0a68a14472a3" containerID="52deb8fee833e0dd1c0fe378504be9db9b898982ceab471a759dfe630ce6c066" exitCode=0 Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.380197 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2qm5" event={"ID":"da385f71-d737-46af-a752-0a68a14472a3","Type":"ContainerDied","Data":"52deb8fee833e0dd1c0fe378504be9db9b898982ceab471a759dfe630ce6c066"} Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 
12:16:43.381496 4745 generic.go:334] "Generic (PLEG): container finished" podID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" containerID="eef07188475438b01a3001cfbedc4c7f2b91a43c5234d3797f7e42dc74bb26e2" exitCode=0 Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.381515 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2f46" event={"ID":"c655fdcc-4a54-4269-b5c1-f2e7c5159408","Type":"ContainerDied","Data":"eef07188475438b01a3001cfbedc4c7f2b91a43c5234d3797f7e42dc74bb26e2"} Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.381546 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2f46" event={"ID":"c655fdcc-4a54-4269-b5c1-f2e7c5159408","Type":"ContainerStarted","Data":"57de25c1d445cde010a30b94ce4c05fa80997accf890942392e4453ca88e12a3"} Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.468730 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.642308 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-catalog-content\") pod \"da385f71-d737-46af-a752-0a68a14472a3\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.642690 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qb5js\" (UniqueName: \"kubernetes.io/projected/da385f71-d737-46af-a752-0a68a14472a3-kube-api-access-qb5js\") pod \"da385f71-d737-46af-a752-0a68a14472a3\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.642857 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-utilities\") pod \"da385f71-d737-46af-a752-0a68a14472a3\" (UID: \"da385f71-d737-46af-a752-0a68a14472a3\") " Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.643225 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-utilities" (OuterVolumeSpecName: "utilities") pod "da385f71-d737-46af-a752-0a68a14472a3" (UID: "da385f71-d737-46af-a752-0a68a14472a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.643617 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.656859 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da385f71-d737-46af-a752-0a68a14472a3-kube-api-access-qb5js" (OuterVolumeSpecName: "kube-api-access-qb5js") pod "da385f71-d737-46af-a752-0a68a14472a3" (UID: "da385f71-d737-46af-a752-0a68a14472a3"). InnerVolumeSpecName "kube-api-access-qb5js". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.657510 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "da385f71-d737-46af-a752-0a68a14472a3" (UID: "da385f71-d737-46af-a752-0a68a14472a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.744406 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qb5js\" (UniqueName: \"kubernetes.io/projected/da385f71-d737-46af-a752-0a68a14472a3-kube-api-access-qb5js\") on node \"crc\" DevicePath \"\"" Jan 21 12:16:43 crc kubenswrapper[4745]: I0121 12:16:43.744435 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da385f71-d737-46af-a752-0a68a14472a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:16:44 crc kubenswrapper[4745]: I0121 12:16:44.391678 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2f46" event={"ID":"c655fdcc-4a54-4269-b5c1-f2e7c5159408","Type":"ContainerStarted","Data":"9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d"} Jan 21 12:16:44 crc kubenswrapper[4745]: I0121 12:16:44.394968 4745 generic.go:334] "Generic (PLEG): container finished" podID="e1e3fb65-d111-4537-b4ad-637387608095" containerID="b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8" exitCode=0 Jan 21 12:16:44 crc kubenswrapper[4745]: I0121 12:16:44.395012 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skfqx" event={"ID":"e1e3fb65-d111-4537-b4ad-637387608095","Type":"ContainerDied","Data":"b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8"} Jan 21 12:16:44 crc kubenswrapper[4745]: I0121 12:16:44.398671 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t2qm5" event={"ID":"da385f71-d737-46af-a752-0a68a14472a3","Type":"ContainerDied","Data":"9b243312ca02bf8f1fd55857c72c0cea191b5cc771d54f11c393e11be6014be2"} Jan 21 12:16:44 crc kubenswrapper[4745]: I0121 12:16:44.398723 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t2qm5" Jan 21 12:16:44 crc kubenswrapper[4745]: I0121 12:16:44.398730 4745 scope.go:117] "RemoveContainer" containerID="52deb8fee833e0dd1c0fe378504be9db9b898982ceab471a759dfe630ce6c066" Jan 21 12:16:44 crc kubenswrapper[4745]: I0121 12:16:44.430263 4745 scope.go:117] "RemoveContainer" containerID="21f21417d6c7c06eb5f40c3427aec3d91df653e4a48d573d4660fb3644a46bcc" Jan 21 12:16:44 crc kubenswrapper[4745]: I0121 12:16:44.476662 4745 scope.go:117] "RemoveContainer" containerID="0d4e40d58aa649dd25e6a8f1c038b60f435f358f3855b4de2ddc6876744910aa" Jan 21 12:16:44 crc kubenswrapper[4745]: I0121 12:16:44.510923 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2qm5"] Jan 21 12:16:44 crc kubenswrapper[4745]: I0121 12:16:44.539312 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t2qm5"] Jan 21 12:16:45 crc kubenswrapper[4745]: I0121 12:16:45.867034 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:16:45 crc kubenswrapper[4745]: I0121 12:16:45.867314 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:16:46 crc kubenswrapper[4745]: I0121 12:16:46.012224 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da385f71-d737-46af-a752-0a68a14472a3" path="/var/lib/kubelet/pods/da385f71-d737-46af-a752-0a68a14472a3/volumes" Jan 21 12:16:46 crc kubenswrapper[4745]: 
I0121 12:16:46.420988 4745 generic.go:334] "Generic (PLEG): container finished" podID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" containerID="9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d" exitCode=0 Jan 21 12:16:46 crc kubenswrapper[4745]: I0121 12:16:46.421031 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2f46" event={"ID":"c655fdcc-4a54-4269-b5c1-f2e7c5159408","Type":"ContainerDied","Data":"9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d"} Jan 21 12:16:46 crc kubenswrapper[4745]: I0121 12:16:46.431518 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skfqx" event={"ID":"e1e3fb65-d111-4537-b4ad-637387608095","Type":"ContainerStarted","Data":"e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e"} Jan 21 12:16:46 crc kubenswrapper[4745]: I0121 12:16:46.466382 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-skfqx" podStartSLOduration=3.409517175 podStartE2EDuration="10.466363749s" podCreationTimestamp="2026-01-21 12:16:36 +0000 UTC" firstStartedPulling="2026-01-21 12:16:38.321466705 +0000 UTC m=+5982.782254303" lastFinishedPulling="2026-01-21 12:16:45.378313289 +0000 UTC m=+5989.839100877" observedRunningTime="2026-01-21 12:16:46.46419838 +0000 UTC m=+5990.924985978" watchObservedRunningTime="2026-01-21 12:16:46.466363749 +0000 UTC m=+5990.927151347" Jan 21 12:16:46 crc kubenswrapper[4745]: I0121 12:16:46.824991 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:46 crc kubenswrapper[4745]: I0121 12:16:46.826072 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:16:47 crc kubenswrapper[4745]: I0121 12:16:47.444591 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-r2f46" event={"ID":"c655fdcc-4a54-4269-b5c1-f2e7c5159408","Type":"ContainerStarted","Data":"9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e"} Jan 21 12:16:47 crc kubenswrapper[4745]: I0121 12:16:47.469911 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r2f46" podStartSLOduration=3.022841956 podStartE2EDuration="6.469885367s" podCreationTimestamp="2026-01-21 12:16:41 +0000 UTC" firstStartedPulling="2026-01-21 12:16:43.388878032 +0000 UTC m=+5987.849665630" lastFinishedPulling="2026-01-21 12:16:46.835921443 +0000 UTC m=+5991.296709041" observedRunningTime="2026-01-21 12:16:47.464414308 +0000 UTC m=+5991.925201906" watchObservedRunningTime="2026-01-21 12:16:47.469885367 +0000 UTC m=+5991.930672975" Jan 21 12:16:47 crc kubenswrapper[4745]: I0121 12:16:47.876786 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-skfqx" podUID="e1e3fb65-d111-4537-b4ad-637387608095" containerName="registry-server" probeResult="failure" output=< Jan 21 12:16:47 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:16:47 crc kubenswrapper[4745]: > Jan 21 12:16:51 crc kubenswrapper[4745]: I0121 12:16:51.818361 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:51 crc kubenswrapper[4745]: I0121 12:16:51.818918 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:51 crc kubenswrapper[4745]: I0121 12:16:51.869189 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:52 crc kubenswrapper[4745]: I0121 12:16:52.558947 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:52 crc kubenswrapper[4745]: I0121 12:16:52.636397 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r2f46"] Jan 21 12:16:54 crc kubenswrapper[4745]: I0121 12:16:54.516760 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r2f46" podUID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" containerName="registry-server" containerID="cri-o://9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e" gracePeriod=2 Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.053634 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.188094 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgdb6\" (UniqueName: \"kubernetes.io/projected/c655fdcc-4a54-4269-b5c1-f2e7c5159408-kube-api-access-pgdb6\") pod \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.188162 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-utilities\") pod \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.189211 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-utilities" (OuterVolumeSpecName: "utilities") pod "c655fdcc-4a54-4269-b5c1-f2e7c5159408" (UID: "c655fdcc-4a54-4269-b5c1-f2e7c5159408"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.189355 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-catalog-content\") pod \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\" (UID: \"c655fdcc-4a54-4269-b5c1-f2e7c5159408\") " Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.189893 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.194122 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c655fdcc-4a54-4269-b5c1-f2e7c5159408-kube-api-access-pgdb6" (OuterVolumeSpecName: "kube-api-access-pgdb6") pod "c655fdcc-4a54-4269-b5c1-f2e7c5159408" (UID: "c655fdcc-4a54-4269-b5c1-f2e7c5159408"). InnerVolumeSpecName "kube-api-access-pgdb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.241109 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c655fdcc-4a54-4269-b5c1-f2e7c5159408" (UID: "c655fdcc-4a54-4269-b5c1-f2e7c5159408"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.292017 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c655fdcc-4a54-4269-b5c1-f2e7c5159408-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.292070 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgdb6\" (UniqueName: \"kubernetes.io/projected/c655fdcc-4a54-4269-b5c1-f2e7c5159408-kube-api-access-pgdb6\") on node \"crc\" DevicePath \"\"" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.530429 4745 generic.go:334] "Generic (PLEG): container finished" podID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" containerID="9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e" exitCode=0 Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.530547 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2f46" event={"ID":"c655fdcc-4a54-4269-b5c1-f2e7c5159408","Type":"ContainerDied","Data":"9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e"} Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.530791 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r2f46" event={"ID":"c655fdcc-4a54-4269-b5c1-f2e7c5159408","Type":"ContainerDied","Data":"57de25c1d445cde010a30b94ce4c05fa80997accf890942392e4453ca88e12a3"} Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.530819 4745 scope.go:117] "RemoveContainer" containerID="9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.530570 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r2f46" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.550586 4745 scope.go:117] "RemoveContainer" containerID="9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.592863 4745 scope.go:117] "RemoveContainer" containerID="eef07188475438b01a3001cfbedc4c7f2b91a43c5234d3797f7e42dc74bb26e2" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.614481 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r2f46"] Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.634394 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r2f46"] Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.646675 4745 scope.go:117] "RemoveContainer" containerID="9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e" Jan 21 12:16:55 crc kubenswrapper[4745]: E0121 12:16:55.647216 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e\": container with ID starting with 9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e not found: ID does not exist" containerID="9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.647319 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e"} err="failed to get container status \"9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e\": rpc error: code = NotFound desc = could not find container \"9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e\": container with ID starting with 9dcbacc6d41e898adc359b8f29d1a1559d3d4afb2effb7d2e4f5712fd1e3ab7e not 
found: ID does not exist" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.647411 4745 scope.go:117] "RemoveContainer" containerID="9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d" Jan 21 12:16:55 crc kubenswrapper[4745]: E0121 12:16:55.648014 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d\": container with ID starting with 9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d not found: ID does not exist" containerID="9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.648105 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d"} err="failed to get container status \"9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d\": rpc error: code = NotFound desc = could not find container \"9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d\": container with ID starting with 9c40bd4664032a373a4b4f0efb00cc512877a5809a31415cbb1aaa180195354d not found: ID does not exist" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.648141 4745 scope.go:117] "RemoveContainer" containerID="eef07188475438b01a3001cfbedc4c7f2b91a43c5234d3797f7e42dc74bb26e2" Jan 21 12:16:55 crc kubenswrapper[4745]: E0121 12:16:55.648568 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef07188475438b01a3001cfbedc4c7f2b91a43c5234d3797f7e42dc74bb26e2\": container with ID starting with eef07188475438b01a3001cfbedc4c7f2b91a43c5234d3797f7e42dc74bb26e2 not found: ID does not exist" containerID="eef07188475438b01a3001cfbedc4c7f2b91a43c5234d3797f7e42dc74bb26e2" Jan 21 12:16:55 crc kubenswrapper[4745]: I0121 12:16:55.648599 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef07188475438b01a3001cfbedc4c7f2b91a43c5234d3797f7e42dc74bb26e2"} err="failed to get container status \"eef07188475438b01a3001cfbedc4c7f2b91a43c5234d3797f7e42dc74bb26e2\": rpc error: code = NotFound desc = could not find container \"eef07188475438b01a3001cfbedc4c7f2b91a43c5234d3797f7e42dc74bb26e2\": container with ID starting with eef07188475438b01a3001cfbedc4c7f2b91a43c5234d3797f7e42dc74bb26e2 not found: ID does not exist" Jan 21 12:16:56 crc kubenswrapper[4745]: I0121 12:16:56.012961 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" path="/var/lib/kubelet/pods/c655fdcc-4a54-4269-b5c1-f2e7c5159408/volumes" Jan 21 12:16:57 crc kubenswrapper[4745]: I0121 12:16:57.882330 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-skfqx" podUID="e1e3fb65-d111-4537-b4ad-637387608095" containerName="registry-server" probeResult="failure" output=< Jan 21 12:16:57 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:16:57 crc kubenswrapper[4745]: > Jan 21 12:17:06 crc kubenswrapper[4745]: I0121 12:17:06.893434 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:17:06 crc kubenswrapper[4745]: I0121 12:17:06.965895 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:17:07 crc kubenswrapper[4745]: I0121 12:17:07.694741 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skfqx"] Jan 21 12:17:08 crc kubenswrapper[4745]: I0121 12:17:08.655281 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-skfqx" podUID="e1e3fb65-d111-4537-b4ad-637387608095" 
containerName="registry-server" containerID="cri-o://e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e" gracePeriod=2 Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.247728 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.277019 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-catalog-content\") pod \"e1e3fb65-d111-4537-b4ad-637387608095\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.277166 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfttf\" (UniqueName: \"kubernetes.io/projected/e1e3fb65-d111-4537-b4ad-637387608095-kube-api-access-vfttf\") pod \"e1e3fb65-d111-4537-b4ad-637387608095\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.277268 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-utilities\") pod \"e1e3fb65-d111-4537-b4ad-637387608095\" (UID: \"e1e3fb65-d111-4537-b4ad-637387608095\") " Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.278888 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-utilities" (OuterVolumeSpecName: "utilities") pod "e1e3fb65-d111-4537-b4ad-637387608095" (UID: "e1e3fb65-d111-4537-b4ad-637387608095"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.289037 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1e3fb65-d111-4537-b4ad-637387608095-kube-api-access-vfttf" (OuterVolumeSpecName: "kube-api-access-vfttf") pod "e1e3fb65-d111-4537-b4ad-637387608095" (UID: "e1e3fb65-d111-4537-b4ad-637387608095"). InnerVolumeSpecName "kube-api-access-vfttf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.379573 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfttf\" (UniqueName: \"kubernetes.io/projected/e1e3fb65-d111-4537-b4ad-637387608095-kube-api-access-vfttf\") on node \"crc\" DevicePath \"\"" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.379812 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.447183 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1e3fb65-d111-4537-b4ad-637387608095" (UID: "e1e3fb65-d111-4537-b4ad-637387608095"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.482018 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1e3fb65-d111-4537-b4ad-637387608095-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.669772 4745 generic.go:334] "Generic (PLEG): container finished" podID="e1e3fb65-d111-4537-b4ad-637387608095" containerID="e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e" exitCode=0 Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.669829 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skfqx" event={"ID":"e1e3fb65-d111-4537-b4ad-637387608095","Type":"ContainerDied","Data":"e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e"} Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.669833 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-skfqx" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.669888 4745 scope.go:117] "RemoveContainer" containerID="e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.669872 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-skfqx" event={"ID":"e1e3fb65-d111-4537-b4ad-637387608095","Type":"ContainerDied","Data":"3f676d86c6f46d9cf257d4986afb60f2d48a245e6fce194a6d2e9dcc5d32b51a"} Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.711827 4745 scope.go:117] "RemoveContainer" containerID="b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.712216 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-skfqx"] Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.719121 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-skfqx"] Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.741037 4745 scope.go:117] "RemoveContainer" containerID="52e761ea1262932fcf676a79db50b9d0fb633cde20a0269242dab6e388824d6f" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.789034 4745 scope.go:117] "RemoveContainer" containerID="e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e" Jan 21 12:17:09 crc kubenswrapper[4745]: E0121 12:17:09.789461 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e\": container with ID starting with e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e not found: ID does not exist" containerID="e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.789503 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e"} err="failed to get container status \"e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e\": rpc error: code = NotFound desc = could not find container \"e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e\": container with ID starting with e7ffd9461fcad7b3c67f779ecdce9b558e5a6be4e337b0a9f2c1ce3effec7d6e not found: ID does not exist" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.789553 4745 scope.go:117] "RemoveContainer" containerID="b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8" Jan 21 12:17:09 crc kubenswrapper[4745]: E0121 12:17:09.789804 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8\": container with ID starting with b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8 not found: ID does not exist" containerID="b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.789840 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8"} err="failed to get container status \"b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8\": rpc error: code = NotFound desc = could not find container \"b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8\": container with ID starting with b9961023cad280c7980ccfd832be2b8f2b4e9adee52e950ec2f2f25fc5f127a8 not found: ID does not exist" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.789861 4745 scope.go:117] "RemoveContainer" containerID="52e761ea1262932fcf676a79db50b9d0fb633cde20a0269242dab6e388824d6f" Jan 21 12:17:09 crc kubenswrapper[4745]: E0121 
12:17:09.790348 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52e761ea1262932fcf676a79db50b9d0fb633cde20a0269242dab6e388824d6f\": container with ID starting with 52e761ea1262932fcf676a79db50b9d0fb633cde20a0269242dab6e388824d6f not found: ID does not exist" containerID="52e761ea1262932fcf676a79db50b9d0fb633cde20a0269242dab6e388824d6f" Jan 21 12:17:09 crc kubenswrapper[4745]: I0121 12:17:09.790378 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"52e761ea1262932fcf676a79db50b9d0fb633cde20a0269242dab6e388824d6f"} err="failed to get container status \"52e761ea1262932fcf676a79db50b9d0fb633cde20a0269242dab6e388824d6f\": rpc error: code = NotFound desc = could not find container \"52e761ea1262932fcf676a79db50b9d0fb633cde20a0269242dab6e388824d6f\": container with ID starting with 52e761ea1262932fcf676a79db50b9d0fb633cde20a0269242dab6e388824d6f not found: ID does not exist" Jan 21 12:17:10 crc kubenswrapper[4745]: I0121 12:17:10.013227 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1e3fb65-d111-4537-b4ad-637387608095" path="/var/lib/kubelet/pods/e1e3fb65-d111-4537-b4ad-637387608095/volumes" Jan 21 12:17:15 crc kubenswrapper[4745]: I0121 12:17:15.866916 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:17:15 crc kubenswrapper[4745]: I0121 12:17:15.867452 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 21 12:17:15 crc kubenswrapper[4745]: I0121 12:17:15.867504 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 12:17:15 crc kubenswrapper[4745]: I0121 12:17:15.868703 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:17:15 crc kubenswrapper[4745]: I0121 12:17:15.868781 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" gracePeriod=600 Jan 21 12:17:16 crc kubenswrapper[4745]: E0121 12:17:16.011097 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:17:16 crc kubenswrapper[4745]: I0121 12:17:16.738032 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" exitCode=0 Jan 21 12:17:16 crc kubenswrapper[4745]: I0121 12:17:16.738082 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" 
event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92"} Jan 21 12:17:16 crc kubenswrapper[4745]: I0121 12:17:16.738122 4745 scope.go:117] "RemoveContainer" containerID="68966aad646b72551463f9b571435eed240b8441ee88cf209d34ebaf51aaf3f9" Jan 21 12:17:16 crc kubenswrapper[4745]: I0121 12:17:16.739017 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:17:16 crc kubenswrapper[4745]: E0121 12:17:16.739440 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:17:30 crc kubenswrapper[4745]: I0121 12:17:30.000039 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:17:30 crc kubenswrapper[4745]: E0121 12:17:30.000896 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:17:44 crc kubenswrapper[4745]: I0121 12:17:44.000565 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:17:44 crc kubenswrapper[4745]: E0121 12:17:44.003681 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:17:58 crc kubenswrapper[4745]: I0121 12:17:58.000347 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:17:58 crc kubenswrapper[4745]: E0121 12:17:58.001213 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:18:12 crc kubenswrapper[4745]: I0121 12:18:12.000402 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:18:12 crc kubenswrapper[4745]: E0121 12:18:12.001197 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:18:27 crc kubenswrapper[4745]: I0121 12:18:27.000334 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:18:27 crc kubenswrapper[4745]: E0121 12:18:27.001308 4745 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.035755 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xlqzr"] Jan 21 12:18:36 crc kubenswrapper[4745]: E0121 12:18:36.036808 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da385f71-d737-46af-a752-0a68a14472a3" containerName="extract-content" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.037204 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="da385f71-d737-46af-a752-0a68a14472a3" containerName="extract-content" Jan 21 12:18:36 crc kubenswrapper[4745]: E0121 12:18:36.037227 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da385f71-d737-46af-a752-0a68a14472a3" containerName="registry-server" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.037235 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="da385f71-d737-46af-a752-0a68a14472a3" containerName="registry-server" Jan 21 12:18:36 crc kubenswrapper[4745]: E0121 12:18:36.037258 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e3fb65-d111-4537-b4ad-637387608095" containerName="extract-content" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.037282 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e3fb65-d111-4537-b4ad-637387608095" containerName="extract-content" Jan 21 12:18:36 crc kubenswrapper[4745]: E0121 12:18:36.037294 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e3fb65-d111-4537-b4ad-637387608095" containerName="registry-server" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 
12:18:36.037301 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e3fb65-d111-4537-b4ad-637387608095" containerName="registry-server" Jan 21 12:18:36 crc kubenswrapper[4745]: E0121 12:18:36.037310 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" containerName="extract-utilities" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.037318 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" containerName="extract-utilities" Jan 21 12:18:36 crc kubenswrapper[4745]: E0121 12:18:36.037331 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da385f71-d737-46af-a752-0a68a14472a3" containerName="extract-utilities" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.037338 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="da385f71-d737-46af-a752-0a68a14472a3" containerName="extract-utilities" Jan 21 12:18:36 crc kubenswrapper[4745]: E0121 12:18:36.037347 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" containerName="registry-server" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.037353 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" containerName="registry-server" Jan 21 12:18:36 crc kubenswrapper[4745]: E0121 12:18:36.037364 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1e3fb65-d111-4537-b4ad-637387608095" containerName="extract-utilities" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.037371 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1e3fb65-d111-4537-b4ad-637387608095" containerName="extract-utilities" Jan 21 12:18:36 crc kubenswrapper[4745]: E0121 12:18:36.037382 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" containerName="extract-content" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 
12:18:36.037388 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" containerName="extract-content" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.037975 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c655fdcc-4a54-4269-b5c1-f2e7c5159408" containerName="registry-server" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.038005 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1e3fb65-d111-4537-b4ad-637387608095" containerName="registry-server" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.038030 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="da385f71-d737-46af-a752-0a68a14472a3" containerName="registry-server" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.039795 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.052279 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlqzr"] Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.207482 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-utilities\") pod \"community-operators-xlqzr\" (UID: \"e936acf5-d01a-42b7-868b-965ac5372604\") " pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.207886 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-catalog-content\") pod \"community-operators-xlqzr\" (UID: \"e936acf5-d01a-42b7-868b-965ac5372604\") " pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 
12:18:36.207944 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2clt\" (UniqueName: \"kubernetes.io/projected/e936acf5-d01a-42b7-868b-965ac5372604-kube-api-access-f2clt\") pod \"community-operators-xlqzr\" (UID: \"e936acf5-d01a-42b7-868b-965ac5372604\") " pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.310165 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2clt\" (UniqueName: \"kubernetes.io/projected/e936acf5-d01a-42b7-868b-965ac5372604-kube-api-access-f2clt\") pod \"community-operators-xlqzr\" (UID: \"e936acf5-d01a-42b7-868b-965ac5372604\") " pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.310312 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-utilities\") pod \"community-operators-xlqzr\" (UID: \"e936acf5-d01a-42b7-868b-965ac5372604\") " pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.310358 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-catalog-content\") pod \"community-operators-xlqzr\" (UID: \"e936acf5-d01a-42b7-868b-965ac5372604\") " pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.310895 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-utilities\") pod \"community-operators-xlqzr\" (UID: \"e936acf5-d01a-42b7-868b-965ac5372604\") " pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 
12:18:36.311224 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-catalog-content\") pod \"community-operators-xlqzr\" (UID: \"e936acf5-d01a-42b7-868b-965ac5372604\") " pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.341552 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2clt\" (UniqueName: \"kubernetes.io/projected/e936acf5-d01a-42b7-868b-965ac5372604-kube-api-access-f2clt\") pod \"community-operators-xlqzr\" (UID: \"e936acf5-d01a-42b7-868b-965ac5372604\") " pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.364571 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:36 crc kubenswrapper[4745]: I0121 12:18:36.857207 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xlqzr"] Jan 21 12:18:37 crc kubenswrapper[4745]: I0121 12:18:37.458564 4745 generic.go:334] "Generic (PLEG): container finished" podID="e936acf5-d01a-42b7-868b-965ac5372604" containerID="88167e056f6ac04dd7ef46b7a7023c039c100396ec3b504fb74794520c2d8d32" exitCode=0 Jan 21 12:18:37 crc kubenswrapper[4745]: I0121 12:18:37.458653 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlqzr" event={"ID":"e936acf5-d01a-42b7-868b-965ac5372604","Type":"ContainerDied","Data":"88167e056f6ac04dd7ef46b7a7023c039c100396ec3b504fb74794520c2d8d32"} Jan 21 12:18:37 crc kubenswrapper[4745]: I0121 12:18:37.458881 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlqzr" 
event={"ID":"e936acf5-d01a-42b7-868b-965ac5372604","Type":"ContainerStarted","Data":"aba828d326be6739ca2f132f3da094980a2bc84265917bda8920f771bc859bff"} Jan 21 12:18:38 crc kubenswrapper[4745]: I0121 12:18:38.000913 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:18:38 crc kubenswrapper[4745]: E0121 12:18:38.001466 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:18:38 crc kubenswrapper[4745]: I0121 12:18:38.472047 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlqzr" event={"ID":"e936acf5-d01a-42b7-868b-965ac5372604","Type":"ContainerStarted","Data":"71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9"} Jan 21 12:18:39 crc kubenswrapper[4745]: I0121 12:18:39.492585 4745 generic.go:334] "Generic (PLEG): container finished" podID="e936acf5-d01a-42b7-868b-965ac5372604" containerID="71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9" exitCode=0 Jan 21 12:18:39 crc kubenswrapper[4745]: I0121 12:18:39.492757 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlqzr" event={"ID":"e936acf5-d01a-42b7-868b-965ac5372604","Type":"ContainerDied","Data":"71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9"} Jan 21 12:18:41 crc kubenswrapper[4745]: I0121 12:18:41.519616 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlqzr" 
event={"ID":"e936acf5-d01a-42b7-868b-965ac5372604","Type":"ContainerStarted","Data":"170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353"} Jan 21 12:18:41 crc kubenswrapper[4745]: I0121 12:18:41.549575 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xlqzr" podStartSLOduration=2.607543034 podStartE2EDuration="5.54951616s" podCreationTimestamp="2026-01-21 12:18:36 +0000 UTC" firstStartedPulling="2026-01-21 12:18:37.460291002 +0000 UTC m=+6101.921078600" lastFinishedPulling="2026-01-21 12:18:40.402264128 +0000 UTC m=+6104.863051726" observedRunningTime="2026-01-21 12:18:41.54106535 +0000 UTC m=+6106.001852948" watchObservedRunningTime="2026-01-21 12:18:41.54951616 +0000 UTC m=+6106.010303758" Jan 21 12:18:46 crc kubenswrapper[4745]: I0121 12:18:46.364973 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:46 crc kubenswrapper[4745]: I0121 12:18:46.365585 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:46 crc kubenswrapper[4745]: I0121 12:18:46.418897 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:46 crc kubenswrapper[4745]: I0121 12:18:46.629470 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:49 crc kubenswrapper[4745]: I0121 12:18:49.000296 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:18:49 crc kubenswrapper[4745]: E0121 12:18:49.001068 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:18:49 crc kubenswrapper[4745]: I0121 12:18:49.861172 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlqzr"] Jan 21 12:18:49 crc kubenswrapper[4745]: I0121 12:18:49.861882 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xlqzr" podUID="e936acf5-d01a-42b7-868b-965ac5372604" containerName="registry-server" containerID="cri-o://170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353" gracePeriod=2 Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.336873 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.442936 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-utilities\") pod \"e936acf5-d01a-42b7-868b-965ac5372604\" (UID: \"e936acf5-d01a-42b7-868b-965ac5372604\") " Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.443269 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2clt\" (UniqueName: \"kubernetes.io/projected/e936acf5-d01a-42b7-868b-965ac5372604-kube-api-access-f2clt\") pod \"e936acf5-d01a-42b7-868b-965ac5372604\" (UID: \"e936acf5-d01a-42b7-868b-965ac5372604\") " Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.443358 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-catalog-content\") pod \"e936acf5-d01a-42b7-868b-965ac5372604\" (UID: 
\"e936acf5-d01a-42b7-868b-965ac5372604\") " Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.443982 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-utilities" (OuterVolumeSpecName: "utilities") pod "e936acf5-d01a-42b7-868b-965ac5372604" (UID: "e936acf5-d01a-42b7-868b-965ac5372604"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.449180 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e936acf5-d01a-42b7-868b-965ac5372604-kube-api-access-f2clt" (OuterVolumeSpecName: "kube-api-access-f2clt") pod "e936acf5-d01a-42b7-868b-965ac5372604" (UID: "e936acf5-d01a-42b7-868b-965ac5372604"). InnerVolumeSpecName "kube-api-access-f2clt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.497867 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e936acf5-d01a-42b7-868b-965ac5372604" (UID: "e936acf5-d01a-42b7-868b-965ac5372604"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.546217 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.546269 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2clt\" (UniqueName: \"kubernetes.io/projected/e936acf5-d01a-42b7-868b-965ac5372604-kube-api-access-f2clt\") on node \"crc\" DevicePath \"\"" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.546284 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e936acf5-d01a-42b7-868b-965ac5372604-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.612102 4745 generic.go:334] "Generic (PLEG): container finished" podID="e936acf5-d01a-42b7-868b-965ac5372604" containerID="170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353" exitCode=0 Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.612169 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlqzr" event={"ID":"e936acf5-d01a-42b7-868b-965ac5372604","Type":"ContainerDied","Data":"170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353"} Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.612183 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-xlqzr" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.612218 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xlqzr" event={"ID":"e936acf5-d01a-42b7-868b-965ac5372604","Type":"ContainerDied","Data":"aba828d326be6739ca2f132f3da094980a2bc84265917bda8920f771bc859bff"} Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.612238 4745 scope.go:117] "RemoveContainer" containerID="170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.635318 4745 scope.go:117] "RemoveContainer" containerID="71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.654884 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xlqzr"] Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.663888 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xlqzr"] Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.671150 4745 scope.go:117] "RemoveContainer" containerID="88167e056f6ac04dd7ef46b7a7023c039c100396ec3b504fb74794520c2d8d32" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.709167 4745 scope.go:117] "RemoveContainer" containerID="170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353" Jan 21 12:18:50 crc kubenswrapper[4745]: E0121 12:18:50.710028 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353\": container with ID starting with 170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353 not found: ID does not exist" containerID="170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.710060 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353"} err="failed to get container status \"170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353\": rpc error: code = NotFound desc = could not find container \"170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353\": container with ID starting with 170b3b14a9e3ca85bcb02bf0610d5704e0e81e212f6366c2aecc912a688fd353 not found: ID does not exist" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.710081 4745 scope.go:117] "RemoveContainer" containerID="71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9" Jan 21 12:18:50 crc kubenswrapper[4745]: E0121 12:18:50.710279 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9\": container with ID starting with 71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9 not found: ID does not exist" containerID="71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.710297 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9"} err="failed to get container status \"71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9\": rpc error: code = NotFound desc = could not find container \"71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9\": container with ID starting with 71e30a7b2126bd08b831eca711a5f3d855135ef99e5255467dc780e479fac0a9 not found: ID does not exist" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.710309 4745 scope.go:117] "RemoveContainer" containerID="88167e056f6ac04dd7ef46b7a7023c039c100396ec3b504fb74794520c2d8d32" Jan 21 12:18:50 crc kubenswrapper[4745]: E0121 
12:18:50.710522 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88167e056f6ac04dd7ef46b7a7023c039c100396ec3b504fb74794520c2d8d32\": container with ID starting with 88167e056f6ac04dd7ef46b7a7023c039c100396ec3b504fb74794520c2d8d32 not found: ID does not exist" containerID="88167e056f6ac04dd7ef46b7a7023c039c100396ec3b504fb74794520c2d8d32" Jan 21 12:18:50 crc kubenswrapper[4745]: I0121 12:18:50.710595 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88167e056f6ac04dd7ef46b7a7023c039c100396ec3b504fb74794520c2d8d32"} err="failed to get container status \"88167e056f6ac04dd7ef46b7a7023c039c100396ec3b504fb74794520c2d8d32\": rpc error: code = NotFound desc = could not find container \"88167e056f6ac04dd7ef46b7a7023c039c100396ec3b504fb74794520c2d8d32\": container with ID starting with 88167e056f6ac04dd7ef46b7a7023c039c100396ec3b504fb74794520c2d8d32 not found: ID does not exist" Jan 21 12:18:52 crc kubenswrapper[4745]: I0121 12:18:52.022646 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e936acf5-d01a-42b7-868b-965ac5372604" path="/var/lib/kubelet/pods/e936acf5-d01a-42b7-868b-965ac5372604/volumes" Jan 21 12:19:01 crc kubenswrapper[4745]: I0121 12:19:00.999893 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:19:01 crc kubenswrapper[4745]: E0121 12:19:01.000724 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:19:14 crc kubenswrapper[4745]: I0121 12:19:14.001019 
4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:19:14 crc kubenswrapper[4745]: E0121 12:19:14.002598 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:19:27 crc kubenswrapper[4745]: I0121 12:19:27.000505 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:19:27 crc kubenswrapper[4745]: E0121 12:19:27.001318 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:19:42 crc kubenswrapper[4745]: I0121 12:19:42.000928 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:19:42 crc kubenswrapper[4745]: E0121 12:19:42.001841 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:19:53 crc kubenswrapper[4745]: I0121 
12:19:53.000600 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:19:53 crc kubenswrapper[4745]: E0121 12:19:53.001481 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:20:04 crc kubenswrapper[4745]: I0121 12:20:04.001170 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:20:04 crc kubenswrapper[4745]: E0121 12:20:04.001945 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:20:19 crc kubenswrapper[4745]: I0121 12:20:19.000565 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:20:19 crc kubenswrapper[4745]: E0121 12:20:19.001297 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:20:20 crc 
kubenswrapper[4745]: I0121 12:20:20.962527 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6c4c669957-bh9tq"] Jan 21 12:20:20 crc kubenswrapper[4745]: E0121 12:20:20.963019 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e936acf5-d01a-42b7-868b-965ac5372604" containerName="extract-content" Jan 21 12:20:20 crc kubenswrapper[4745]: I0121 12:20:20.963031 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e936acf5-d01a-42b7-868b-965ac5372604" containerName="extract-content" Jan 21 12:20:20 crc kubenswrapper[4745]: E0121 12:20:20.963041 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e936acf5-d01a-42b7-868b-965ac5372604" containerName="registry-server" Jan 21 12:20:20 crc kubenswrapper[4745]: I0121 12:20:20.963046 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e936acf5-d01a-42b7-868b-965ac5372604" containerName="registry-server" Jan 21 12:20:20 crc kubenswrapper[4745]: E0121 12:20:20.963077 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e936acf5-d01a-42b7-868b-965ac5372604" containerName="extract-utilities" Jan 21 12:20:20 crc kubenswrapper[4745]: I0121 12:20:20.963084 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e936acf5-d01a-42b7-868b-965ac5372604" containerName="extract-utilities" Jan 21 12:20:20 crc kubenswrapper[4745]: I0121 12:20:20.963276 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e936acf5-d01a-42b7-868b-965ac5372604" containerName="registry-server" Jan 21 12:20:20 crc kubenswrapper[4745]: I0121 12:20:20.964435 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.038839 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4c669957-bh9tq"] Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.161744 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-config\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.161902 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-ovndb-tls-certs\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.162004 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-internal-tls-certs\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.162045 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-combined-ca-bundle\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.162080 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-bghhk\" (UniqueName: \"kubernetes.io/projected/6b270428-536d-4f65-b13a-e52446574239-kube-api-access-bghhk\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.162102 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-httpd-config\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.162184 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-public-tls-certs\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.264346 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-config\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.264414 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-ovndb-tls-certs\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.264452 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-internal-tls-certs\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.264470 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-combined-ca-bundle\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.264508 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bghhk\" (UniqueName: \"kubernetes.io/projected/6b270428-536d-4f65-b13a-e52446574239-kube-api-access-bghhk\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.264532 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-httpd-config\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.264579 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-public-tls-certs\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.272007 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-combined-ca-bundle\") 
pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.272098 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-config\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.272277 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-ovndb-tls-certs\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.273583 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-httpd-config\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.274427 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-internal-tls-certs\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.284613 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b270428-536d-4f65-b13a-e52446574239-public-tls-certs\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc 
kubenswrapper[4745]: I0121 12:20:21.288437 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bghhk\" (UniqueName: \"kubernetes.io/projected/6b270428-536d-4f65-b13a-e52446574239-kube-api-access-bghhk\") pod \"neutron-6c4c669957-bh9tq\" (UID: \"6b270428-536d-4f65-b13a-e52446574239\") " pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:21 crc kubenswrapper[4745]: I0121 12:20:21.581126 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:22 crc kubenswrapper[4745]: I0121 12:20:22.517236 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6c4c669957-bh9tq"] Jan 21 12:20:23 crc kubenswrapper[4745]: I0121 12:20:23.459773 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4c669957-bh9tq" event={"ID":"6b270428-536d-4f65-b13a-e52446574239","Type":"ContainerStarted","Data":"b751ad2784610221dc14134b113b622e01f7848ba8ce0faa766ae0c39c7d4df2"} Jan 21 12:20:23 crc kubenswrapper[4745]: I0121 12:20:23.460352 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:23 crc kubenswrapper[4745]: I0121 12:20:23.460367 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4c669957-bh9tq" event={"ID":"6b270428-536d-4f65-b13a-e52446574239","Type":"ContainerStarted","Data":"7b05591dc2263689cf38e98ee69e006f363c09ef15e4b84d11e5e2f3ae5a02ca"} Jan 21 12:20:23 crc kubenswrapper[4745]: I0121 12:20:23.460389 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6c4c669957-bh9tq" event={"ID":"6b270428-536d-4f65-b13a-e52446574239","Type":"ContainerStarted","Data":"69243a40f9ff1d3f9decfbc1e4abd8de38caabcd4e24bb7af8e787b42a10a0ed"} Jan 21 12:20:32 crc kubenswrapper[4745]: I0121 12:20:32.000282 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 
12:20:32 crc kubenswrapper[4745]: E0121 12:20:32.001064 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:20:46 crc kubenswrapper[4745]: I0121 12:20:46.006165 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:20:46 crc kubenswrapper[4745]: E0121 12:20:46.006988 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:20:51 crc kubenswrapper[4745]: I0121 12:20:51.595478 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6c4c669957-bh9tq" Jan 21 12:20:51 crc kubenswrapper[4745]: I0121 12:20:51.615249 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6c4c669957-bh9tq" podStartSLOduration=31.615230331 podStartE2EDuration="31.615230331s" podCreationTimestamp="2026-01-21 12:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 12:20:23.498676998 +0000 UTC m=+6207.959464596" watchObservedRunningTime="2026-01-21 12:20:51.615230331 +0000 UTC m=+6236.076017919" Jan 21 12:20:51 crc kubenswrapper[4745]: I0121 12:20:51.664464 4745 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/neutron-dd7dc574f-plxsl"] Jan 21 12:20:51 crc kubenswrapper[4745]: I0121 12:20:51.664732 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dd7dc574f-plxsl" podUID="c45d76bb-2a71-404e-b251-f62126f44bc7" containerName="neutron-api" containerID="cri-o://8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc" gracePeriod=30 Jan 21 12:20:51 crc kubenswrapper[4745]: I0121 12:20:51.665120 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-dd7dc574f-plxsl" podUID="c45d76bb-2a71-404e-b251-f62126f44bc7" containerName="neutron-httpd" containerID="cri-o://881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c" gracePeriod=30 Jan 21 12:20:53 crc kubenswrapper[4745]: I0121 12:20:53.749136 4745 generic.go:334] "Generic (PLEG): container finished" podID="c45d76bb-2a71-404e-b251-f62126f44bc7" containerID="881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c" exitCode=0 Jan 21 12:20:53 crc kubenswrapper[4745]: I0121 12:20:53.749191 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd7dc574f-plxsl" event={"ID":"c45d76bb-2a71-404e-b251-f62126f44bc7","Type":"ContainerDied","Data":"881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c"} Jan 21 12:21:00 crc kubenswrapper[4745]: I0121 12:21:00.000219 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:21:00 crc kubenswrapper[4745]: E0121 12:21:00.001048 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" 
podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.411066 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.434309 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-public-tls-certs\") pod \"c45d76bb-2a71-404e-b251-f62126f44bc7\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.434386 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-internal-tls-certs\") pod \"c45d76bb-2a71-404e-b251-f62126f44bc7\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.434420 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-combined-ca-bundle\") pod \"c45d76bb-2a71-404e-b251-f62126f44bc7\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.434469 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-ovndb-tls-certs\") pod \"c45d76bb-2a71-404e-b251-f62126f44bc7\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.434491 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-config\") pod \"c45d76bb-2a71-404e-b251-f62126f44bc7\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " Jan 21 
12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.434558 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8g49\" (UniqueName: \"kubernetes.io/projected/c45d76bb-2a71-404e-b251-f62126f44bc7-kube-api-access-c8g49\") pod \"c45d76bb-2a71-404e-b251-f62126f44bc7\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.434583 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-httpd-config\") pod \"c45d76bb-2a71-404e-b251-f62126f44bc7\" (UID: \"c45d76bb-2a71-404e-b251-f62126f44bc7\") " Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.451399 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c45d76bb-2a71-404e-b251-f62126f44bc7-kube-api-access-c8g49" (OuterVolumeSpecName: "kube-api-access-c8g49") pod "c45d76bb-2a71-404e-b251-f62126f44bc7" (UID: "c45d76bb-2a71-404e-b251-f62126f44bc7"). InnerVolumeSpecName "kube-api-access-c8g49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.452430 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c45d76bb-2a71-404e-b251-f62126f44bc7" (UID: "c45d76bb-2a71-404e-b251-f62126f44bc7"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.500286 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c45d76bb-2a71-404e-b251-f62126f44bc7" (UID: "c45d76bb-2a71-404e-b251-f62126f44bc7"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.516724 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-config" (OuterVolumeSpecName: "config") pod "c45d76bb-2a71-404e-b251-f62126f44bc7" (UID: "c45d76bb-2a71-404e-b251-f62126f44bc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.521781 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c45d76bb-2a71-404e-b251-f62126f44bc7" (UID: "c45d76bb-2a71-404e-b251-f62126f44bc7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.529371 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c45d76bb-2a71-404e-b251-f62126f44bc7" (UID: "c45d76bb-2a71-404e-b251-f62126f44bc7"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.535711 4745 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.535738 4745 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.535748 4745 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.535757 4745 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-config\") on node \"crc\" DevicePath \"\"" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.535767 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8g49\" (UniqueName: \"kubernetes.io/projected/c45d76bb-2a71-404e-b251-f62126f44bc7-kube-api-access-c8g49\") on node \"crc\" DevicePath \"\"" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.535777 4745 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.555292 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c45d76bb-2a71-404e-b251-f62126f44bc7" (UID: 
"c45d76bb-2a71-404e-b251-f62126f44bc7"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.637912 4745 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c45d76bb-2a71-404e-b251-f62126f44bc7-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.826160 4745 generic.go:334] "Generic (PLEG): container finished" podID="c45d76bb-2a71-404e-b251-f62126f44bc7" containerID="8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc" exitCode=0 Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.826301 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd7dc574f-plxsl" event={"ID":"c45d76bb-2a71-404e-b251-f62126f44bc7","Type":"ContainerDied","Data":"8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc"} Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.826589 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-dd7dc574f-plxsl" event={"ID":"c45d76bb-2a71-404e-b251-f62126f44bc7","Type":"ContainerDied","Data":"1e266ca71a6c96a7ab86acf0170d9dd168eef0dc55b00c273f16d32d051129c3"} Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.826377 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-dd7dc574f-plxsl" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.826611 4745 scope.go:117] "RemoveContainer" containerID="881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.860249 4745 scope.go:117] "RemoveContainer" containerID="8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.866292 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-dd7dc574f-plxsl"] Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.876010 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-dd7dc574f-plxsl"] Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.886414 4745 scope.go:117] "RemoveContainer" containerID="881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c" Jan 21 12:21:01 crc kubenswrapper[4745]: E0121 12:21:01.893106 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c\": container with ID starting with 881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c not found: ID does not exist" containerID="881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.893145 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c"} err="failed to get container status \"881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c\": rpc error: code = NotFound desc = could not find container \"881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c\": container with ID starting with 881f4f50621901cc8f4bb9f1cb15780c19c8914c90c23ca92b74fbabdc31199c not found: ID does not exist" Jan 21 12:21:01 crc 
kubenswrapper[4745]: I0121 12:21:01.893172 4745 scope.go:117] "RemoveContainer" containerID="8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc" Jan 21 12:21:01 crc kubenswrapper[4745]: E0121 12:21:01.893647 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc\": container with ID starting with 8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc not found: ID does not exist" containerID="8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc" Jan 21 12:21:01 crc kubenswrapper[4745]: I0121 12:21:01.893749 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc"} err="failed to get container status \"8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc\": rpc error: code = NotFound desc = could not find container \"8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc\": container with ID starting with 8aeb39c71f40a73c1aa9a4bc0d912312173544982df801b1814e8ae1d8f198fc not found: ID does not exist" Jan 21 12:21:02 crc kubenswrapper[4745]: I0121 12:21:02.011085 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c45d76bb-2a71-404e-b251-f62126f44bc7" path="/var/lib/kubelet/pods/c45d76bb-2a71-404e-b251-f62126f44bc7/volumes" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.486974 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" podUID="b28edf64-70dc-4fc2-8d7f-c1f141cbd31e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/healthz\": dial tcp 10.217.0.61:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.487722 4745 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-g4gpj" podUID="b28edf64-70dc-4fc2-8d7f-c1f141cbd31e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": dial tcp 10.217.0.61:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.487801 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" podUID="42c37f0d-415a-4a72-ae98-07551477c6cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": dial tcp 10.217.0.79:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.487866 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-cb4666565-x9mpf" podUID="42c37f0d-415a-4a72-ae98-07551477c6cc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": dial tcp 10.217.0.79:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.487927 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" podUID="1be9da42-8db6-47b9-b7ec-788b04db264d" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": dial tcp 10.217.0.47:7472: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.487979 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-6b7c494555-zdlbt" podUID="1be9da42-8db6-47b9-b7ec-788b04db264d" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": dial tcp 10.217.0.47:7472: i/o timeout (Client.Timeout 
exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.488042 4745 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-szgtz container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.488068 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" podUID="f5752ba7-8465-4a19-b7a3-d2b4effe5f23" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.488122 4745 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-szgtz container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.488143 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-szgtz" podUID="f5752ba7-8465-4a19-b7a3-d2b4effe5f23" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.488440 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" podUID="bc9be084-edd6-4556-88af-354f416d451c" 
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": dial tcp 10.217.0.57:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.488511 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-9f958b845-hw9zg" podUID="bc9be084-edd6-4556-88af-354f416d451c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/healthz\": dial tcp 10.217.0.57:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.488603 4745 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n5ft4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: i/o timeout" start-of-body= Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.488632 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" podUID="f9c06282-abf7-4d46-90df-6d48394448cf" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: i/o timeout" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.488687 4745 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n5ft4 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: i/o timeout" start-of-body= Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.488709 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n5ft4" podUID="f9c06282-abf7-4d46-90df-6d48394448cf" containerName="catalog-operator" probeResult="failure" 
output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: i/o timeout" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.489981 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" podUID="9ff19137-02fd-4de1-9601-95a5c0fbbed0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/readyz\": dial tcp 10.217.0.60:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.490072 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" podUID="fb04ba1c-d6a0-40aa-b985-f4715cb11257" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.64:8081/healthz\": dial tcp 10.217.0.64:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.490130 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-767fdc4f47-fh7ts" podUID="fb04ba1c-d6a0-40aa-b985-f4715cb11257" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.64:8081/readyz\": dial tcp 10.217.0.64:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.490494 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" podUID="2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.65:8081/readyz\": dial tcp 10.217.0.65:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.490586 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" 
podUID="2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.65:8081/healthz\": dial tcp 10.217.0.65:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.491242 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" podUID="dfb1f262-fe24-45bf-8f75-0e2a81989f3f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": dial tcp 10.217.0.74:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.491325 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-864f6b75bf-dvhql" podUID="dfb1f262-fe24-45bf-8f75-0e2a81989f3f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": dial tcp 10.217.0.74:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.491387 4745 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lw9m4 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: i/o timeout" start-of-body= Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.491412 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" podUID="9f884d1f-fcd5-4179-9350-6b41b3d136b7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: i/o timeout" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.491545 4745 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lw9m4 container/olm-operator 
namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: i/o timeout" start-of-body= Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.491573 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lw9m4" podUID="9f884d1f-fcd5-4179-9350-6b41b3d136b7" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: i/o timeout" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.491645 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" podUID="c0985a55-6ede-4214-87fe-27cb5668dd86" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/healthz\": dial tcp 10.217.0.78:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.491708 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-c87fff755-8xm9d" podUID="c0985a55-6ede-4214-87fe-27cb5668dd86" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": dial tcp 10.217.0.78:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.492200 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" podUID="8381ff45-ae46-437a-894e-1530d39397f8" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.53:8081/readyz\": dial tcp 10.217.0.53:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.492252 4745 patch_prober.go:28] interesting pod/router-default-5444994796-tk5j9 container/router 
namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.492271 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-tk5j9" podUID="5c53e15f-0e61-49a2-bb11-8b39af387be9" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.492335 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-777994b6d8-xpq4v" podUID="8381ff45-ae46-437a-894e-1530d39397f8" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.53:8081/healthz\": dial tcp 10.217.0.53:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.492401 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" podUID="a96f3189-7bbc-404d-ad6d-05b8fefb65fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": dial tcp 10.217.0.80:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.492475 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-7fc9b76cf6-bx656" podUID="a96f3189-7bbc-404d-ad6d-05b8fefb65fc" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/healthz\": dial tcp 10.217.0.80:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.492658 4745 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" podUID="be658ac1-07b6-482b-8b99-35a75fcf3b50" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": dial tcp 10.217.0.81:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.492736 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-65849867d6-g8j7m" podUID="be658ac1-07b6-482b-8b99-35a75fcf3b50" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": dial tcp 10.217.0.81:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.492803 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-9f9vp" podUID="db2f79cd-c6c7-459f-bf98-002583ba5ddd" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.492869 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-9f9vp" podUID="db2f79cd-c6c7-459f-bf98-002583ba5ddd" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.492934 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-9f9vp" podUID="db2f79cd-c6c7-459f-bf98-002583ba5ddd" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": dial tcp 127.0.0.1:7573: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:07 crc kubenswrapper[4745]: I0121 12:21:07.488775 4745 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack-operators/glance-operator-controller-manager-c6994669c-gntws" podUID="9ff19137-02fd-4de1-9601-95a5c0fbbed0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.60:8081/healthz\": dial tcp 10.217.0.60:8081: i/o timeout (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:15 crc kubenswrapper[4745]: I0121 12:21:14.999807 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:21:15 crc kubenswrapper[4745]: E0121 12:21:15.000522 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:21:26 crc kubenswrapper[4745]: I0121 12:21:26.012448 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:21:26 crc kubenswrapper[4745]: E0121 12:21:26.014723 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:21:26 crc kubenswrapper[4745]: I0121 12:21:26.651857 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-78757b4889-clbcs" podUID="2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.65:8081/readyz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:27 crc kubenswrapper[4745]: I0121 12:21:27.136754 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-lgq6w" podUID="ad7637e4-fd78-447b-98ea-20af5f3c5c2a" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.50:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 21 12:21:38 crc kubenswrapper[4745]: I0121 12:21:38.000892 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:21:38 crc kubenswrapper[4745]: E0121 12:21:38.001787 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:21:52 crc kubenswrapper[4745]: I0121 12:21:52.000342 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:21:52 crc kubenswrapper[4745]: E0121 12:21:52.001026 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:22:05 crc kubenswrapper[4745]: I0121 12:22:05.000582 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:22:05 crc 
kubenswrapper[4745]: E0121 12:22:05.001411 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:22:20 crc kubenswrapper[4745]: I0121 12:22:20.002586 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:22:20 crc kubenswrapper[4745]: I0121 12:22:20.326043 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"e53f7aab0975b19e20e20ce19f6355a505f5e23791714a3ba8eb233b74a7ba45"} Jan 21 12:22:54 crc kubenswrapper[4745]: E0121 12:22:54.733559 4745 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.78:41858->38.129.56.78:36213: write tcp 38.129.56.78:41858->38.129.56.78:36213: write: connection reset by peer Jan 21 12:24:45 crc kubenswrapper[4745]: I0121 12:24:45.867170 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:24:45 crc kubenswrapper[4745]: I0121 12:24:45.867727 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 21 12:25:15 crc kubenswrapper[4745]: I0121 12:25:15.867207 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:25:15 crc kubenswrapper[4745]: I0121 12:25:15.867856 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:25:45 crc kubenswrapper[4745]: I0121 12:25:45.866599 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:25:45 crc kubenswrapper[4745]: I0121 12:25:45.867193 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:25:45 crc kubenswrapper[4745]: I0121 12:25:45.867260 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 12:25:45 crc kubenswrapper[4745]: I0121 12:25:45.868336 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e53f7aab0975b19e20e20ce19f6355a505f5e23791714a3ba8eb233b74a7ba45"} 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:25:45 crc kubenswrapper[4745]: I0121 12:25:45.868406 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://e53f7aab0975b19e20e20ce19f6355a505f5e23791714a3ba8eb233b74a7ba45" gracePeriod=600 Jan 21 12:25:46 crc kubenswrapper[4745]: I0121 12:25:46.254771 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="e53f7aab0975b19e20e20ce19f6355a505f5e23791714a3ba8eb233b74a7ba45" exitCode=0 Jan 21 12:25:46 crc kubenswrapper[4745]: I0121 12:25:46.254850 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"e53f7aab0975b19e20e20ce19f6355a505f5e23791714a3ba8eb233b74a7ba45"} Jan 21 12:25:46 crc kubenswrapper[4745]: I0121 12:25:46.256350 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04"} Jan 21 12:25:46 crc kubenswrapper[4745]: I0121 12:25:46.256378 4745 scope.go:117] "RemoveContainer" containerID="92bb8804874f427321148a7363fafeda0e1e13c595e2c98e51e1af9918437e92" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.292661 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ckxtf"] Jan 21 12:26:43 crc kubenswrapper[4745]: E0121 12:26:43.293515 4745 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c45d76bb-2a71-404e-b251-f62126f44bc7" containerName="neutron-httpd" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.297003 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45d76bb-2a71-404e-b251-f62126f44bc7" containerName="neutron-httpd" Jan 21 12:26:43 crc kubenswrapper[4745]: E0121 12:26:43.297053 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c45d76bb-2a71-404e-b251-f62126f44bc7" containerName="neutron-api" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.297062 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c45d76bb-2a71-404e-b251-f62126f44bc7" containerName="neutron-api" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.297355 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45d76bb-2a71-404e-b251-f62126f44bc7" containerName="neutron-api" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.297374 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c45d76bb-2a71-404e-b251-f62126f44bc7" containerName="neutron-httpd" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.302911 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.307610 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ckxtf"] Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.434155 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8b2k\" (UniqueName: \"kubernetes.io/projected/9ff01e87-9fc9-4936-9800-979037f6a8c0-kube-api-access-z8b2k\") pod \"certified-operators-ckxtf\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.434498 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-utilities\") pod \"certified-operators-ckxtf\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.434558 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-catalog-content\") pod \"certified-operators-ckxtf\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.536689 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-utilities\") pod \"certified-operators-ckxtf\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.536745 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-catalog-content\") pod \"certified-operators-ckxtf\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.536797 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8b2k\" (UniqueName: \"kubernetes.io/projected/9ff01e87-9fc9-4936-9800-979037f6a8c0-kube-api-access-z8b2k\") pod \"certified-operators-ckxtf\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.537821 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-utilities\") pod \"certified-operators-ckxtf\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.539225 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-catalog-content\") pod \"certified-operators-ckxtf\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.569765 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8b2k\" (UniqueName: \"kubernetes.io/projected/9ff01e87-9fc9-4936-9800-979037f6a8c0-kube-api-access-z8b2k\") pod \"certified-operators-ckxtf\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:43 crc kubenswrapper[4745]: I0121 12:26:43.632243 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:44 crc kubenswrapper[4745]: I0121 12:26:44.674953 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ckxtf"] Jan 21 12:26:44 crc kubenswrapper[4745]: W0121 12:26:44.706219 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ff01e87_9fc9_4936_9800_979037f6a8c0.slice/crio-8d291693c638dc011833e5de8ff724a5c0b1ac0dafb85a928097a7de56cea590 WatchSource:0}: Error finding container 8d291693c638dc011833e5de8ff724a5c0b1ac0dafb85a928097a7de56cea590: Status 404 returned error can't find the container with id 8d291693c638dc011833e5de8ff724a5c0b1ac0dafb85a928097a7de56cea590 Jan 21 12:26:44 crc kubenswrapper[4745]: I0121 12:26:44.843272 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckxtf" event={"ID":"9ff01e87-9fc9-4936-9800-979037f6a8c0","Type":"ContainerStarted","Data":"8d291693c638dc011833e5de8ff724a5c0b1ac0dafb85a928097a7de56cea590"} Jan 21 12:26:45 crc kubenswrapper[4745]: I0121 12:26:45.869936 4745 generic.go:334] "Generic (PLEG): container finished" podID="9ff01e87-9fc9-4936-9800-979037f6a8c0" containerID="3fd3e46f7bb47ab5bfbad1dcbf91aa0b0244627aeed15dab6565908633497662" exitCode=0 Jan 21 12:26:45 crc kubenswrapper[4745]: I0121 12:26:45.870215 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckxtf" event={"ID":"9ff01e87-9fc9-4936-9800-979037f6a8c0","Type":"ContainerDied","Data":"3fd3e46f7bb47ab5bfbad1dcbf91aa0b0244627aeed15dab6565908633497662"} Jan 21 12:26:45 crc kubenswrapper[4745]: I0121 12:26:45.881849 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:26:46 crc kubenswrapper[4745]: I0121 12:26:46.884659 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-ckxtf" event={"ID":"9ff01e87-9fc9-4936-9800-979037f6a8c0","Type":"ContainerStarted","Data":"9ecd3c58b53180703f4f9680bffe362e65ed3fe638e9155f48da48d61362c747"} Jan 21 12:26:49 crc kubenswrapper[4745]: I0121 12:26:49.916448 4745 generic.go:334] "Generic (PLEG): container finished" podID="9ff01e87-9fc9-4936-9800-979037f6a8c0" containerID="9ecd3c58b53180703f4f9680bffe362e65ed3fe638e9155f48da48d61362c747" exitCode=0 Jan 21 12:26:49 crc kubenswrapper[4745]: I0121 12:26:49.916492 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckxtf" event={"ID":"9ff01e87-9fc9-4936-9800-979037f6a8c0","Type":"ContainerDied","Data":"9ecd3c58b53180703f4f9680bffe362e65ed3fe638e9155f48da48d61362c747"} Jan 21 12:26:50 crc kubenswrapper[4745]: I0121 12:26:50.927567 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckxtf" event={"ID":"9ff01e87-9fc9-4936-9800-979037f6a8c0","Type":"ContainerStarted","Data":"10992c00de8b90ce90ac9112cd9f857184bd335638e856a03bb14b1a71066f38"} Jan 21 12:26:50 crc kubenswrapper[4745]: I0121 12:26:50.952892 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ckxtf" podStartSLOduration=3.399721526 podStartE2EDuration="7.95287238s" podCreationTimestamp="2026-01-21 12:26:43 +0000 UTC" firstStartedPulling="2026-01-21 12:26:45.874327948 +0000 UTC m=+6590.335115546" lastFinishedPulling="2026-01-21 12:26:50.427478802 +0000 UTC m=+6594.888266400" observedRunningTime="2026-01-21 12:26:50.952867339 +0000 UTC m=+6595.413654937" watchObservedRunningTime="2026-01-21 12:26:50.95287238 +0000 UTC m=+6595.413659978" Jan 21 12:26:53 crc kubenswrapper[4745]: I0121 12:26:53.633854 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:53 crc kubenswrapper[4745]: I0121 12:26:53.634183 
4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:26:54 crc kubenswrapper[4745]: I0121 12:26:54.687029 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-ckxtf" podUID="9ff01e87-9fc9-4936-9800-979037f6a8c0" containerName="registry-server" probeResult="failure" output=< Jan 21 12:26:54 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:26:54 crc kubenswrapper[4745]: > Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.186893 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bwf9k"] Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.190020 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.276923 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwf9k"] Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.276927 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-utilities\") pod \"redhat-marketplace-bwf9k\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.277183 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-catalog-content\") pod \"redhat-marketplace-bwf9k\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.277283 4745 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whkcw\" (UniqueName: \"kubernetes.io/projected/4b44e2ef-fff0-4999-b0e1-9598295d6dec-kube-api-access-whkcw\") pod \"redhat-marketplace-bwf9k\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.379119 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-catalog-content\") pod \"redhat-marketplace-bwf9k\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.379309 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whkcw\" (UniqueName: \"kubernetes.io/projected/4b44e2ef-fff0-4999-b0e1-9598295d6dec-kube-api-access-whkcw\") pod \"redhat-marketplace-bwf9k\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.379399 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-utilities\") pod \"redhat-marketplace-bwf9k\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.379887 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-catalog-content\") pod \"redhat-marketplace-bwf9k\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.380836 4745 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-utilities\") pod \"redhat-marketplace-bwf9k\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.401898 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whkcw\" (UniqueName: \"kubernetes.io/projected/4b44e2ef-fff0-4999-b0e1-9598295d6dec-kube-api-access-whkcw\") pod \"redhat-marketplace-bwf9k\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:26:55 crc kubenswrapper[4745]: I0121 12:26:55.564140 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:26:56 crc kubenswrapper[4745]: I0121 12:26:56.229155 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwf9k"] Jan 21 12:26:56 crc kubenswrapper[4745]: E0121 12:26:56.714279 4745 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b44e2ef_fff0_4999_b0e1_9598295d6dec.slice/crio-6a128b35e33c4b4537b85882c8ed6fc6594f582411321d333456d2a8e0bae360.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b44e2ef_fff0_4999_b0e1_9598295d6dec.slice/crio-conmon-6a128b35e33c4b4537b85882c8ed6fc6594f582411321d333456d2a8e0bae360.scope\": RecentStats: unable to find data in memory cache]" Jan 21 12:26:56 crc kubenswrapper[4745]: I0121 12:26:56.994291 4745 generic.go:334] "Generic (PLEG): container finished" podID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerID="6a128b35e33c4b4537b85882c8ed6fc6594f582411321d333456d2a8e0bae360" exitCode=0 Jan 21 12:26:56 crc kubenswrapper[4745]: 
I0121 12:26:56.994377 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwf9k" event={"ID":"4b44e2ef-fff0-4999-b0e1-9598295d6dec","Type":"ContainerDied","Data":"6a128b35e33c4b4537b85882c8ed6fc6594f582411321d333456d2a8e0bae360"} Jan 21 12:26:56 crc kubenswrapper[4745]: I0121 12:26:56.994433 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwf9k" event={"ID":"4b44e2ef-fff0-4999-b0e1-9598295d6dec","Type":"ContainerStarted","Data":"fd4b2c6f5d78863f39f1f03464320587a64bef117e09645e7bcf3846e59f42d1"} Jan 21 12:27:03 crc kubenswrapper[4745]: I0121 12:27:03.076485 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwf9k" event={"ID":"4b44e2ef-fff0-4999-b0e1-9598295d6dec","Type":"ContainerStarted","Data":"cdf0cf79f43e14a6b8968eaa7b9a8ba51e30c35b84031d3d4253fd480f83a6fb"} Jan 21 12:27:03 crc kubenswrapper[4745]: I0121 12:27:03.685515 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:27:03 crc kubenswrapper[4745]: I0121 12:27:03.740930 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:27:04 crc kubenswrapper[4745]: I0121 12:27:04.088666 4745 generic.go:334] "Generic (PLEG): container finished" podID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerID="cdf0cf79f43e14a6b8968eaa7b9a8ba51e30c35b84031d3d4253fd480f83a6fb" exitCode=0 Jan 21 12:27:04 crc kubenswrapper[4745]: I0121 12:27:04.088758 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwf9k" event={"ID":"4b44e2ef-fff0-4999-b0e1-9598295d6dec","Type":"ContainerDied","Data":"cdf0cf79f43e14a6b8968eaa7b9a8ba51e30c35b84031d3d4253fd480f83a6fb"} Jan 21 12:27:04 crc kubenswrapper[4745]: I0121 12:27:04.316457 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-ckxtf"] Jan 21 12:27:05 crc kubenswrapper[4745]: I0121 12:27:05.098893 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ckxtf" podUID="9ff01e87-9fc9-4936-9800-979037f6a8c0" containerName="registry-server" containerID="cri-o://10992c00de8b90ce90ac9112cd9f857184bd335638e856a03bb14b1a71066f38" gracePeriod=2 Jan 21 12:27:05 crc kubenswrapper[4745]: I0121 12:27:05.101706 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwf9k" event={"ID":"4b44e2ef-fff0-4999-b0e1-9598295d6dec","Type":"ContainerStarted","Data":"28f300342f7d97ff0bddde49ae740ced5435d602971bf206a221dd846fe510fe"} Jan 21 12:27:05 crc kubenswrapper[4745]: I0121 12:27:05.125356 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bwf9k" podStartSLOduration=2.274312134 podStartE2EDuration="10.125335716s" podCreationTimestamp="2026-01-21 12:26:55 +0000 UTC" firstStartedPulling="2026-01-21 12:26:56.997208071 +0000 UTC m=+6601.457995669" lastFinishedPulling="2026-01-21 12:27:04.848231653 +0000 UTC m=+6609.309019251" observedRunningTime="2026-01-21 12:27:05.12334177 +0000 UTC m=+6609.584129388" watchObservedRunningTime="2026-01-21 12:27:05.125335716 +0000 UTC m=+6609.586123314" Jan 21 12:27:05 crc kubenswrapper[4745]: I0121 12:27:05.564436 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:27:05 crc kubenswrapper[4745]: I0121 12:27:05.564903 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:27:06 crc kubenswrapper[4745]: I0121 12:27:06.119716 4745 generic.go:334] "Generic (PLEG): container finished" podID="9ff01e87-9fc9-4936-9800-979037f6a8c0" 
containerID="10992c00de8b90ce90ac9112cd9f857184bd335638e856a03bb14b1a71066f38" exitCode=0 Jan 21 12:27:06 crc kubenswrapper[4745]: I0121 12:27:06.119799 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckxtf" event={"ID":"9ff01e87-9fc9-4936-9800-979037f6a8c0","Type":"ContainerDied","Data":"10992c00de8b90ce90ac9112cd9f857184bd335638e856a03bb14b1a71066f38"} Jan 21 12:27:06 crc kubenswrapper[4745]: I0121 12:27:06.664181 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-bwf9k" podUID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerName="registry-server" probeResult="failure" output=< Jan 21 12:27:06 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:27:06 crc kubenswrapper[4745]: > Jan 21 12:27:06 crc kubenswrapper[4745]: I0121 12:27:06.984418 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.132829 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckxtf" event={"ID":"9ff01e87-9fc9-4936-9800-979037f6a8c0","Type":"ContainerDied","Data":"8d291693c638dc011833e5de8ff724a5c0b1ac0dafb85a928097a7de56cea590"} Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.132874 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ckxtf" Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.132900 4745 scope.go:117] "RemoveContainer" containerID="10992c00de8b90ce90ac9112cd9f857184bd335638e856a03bb14b1a71066f38" Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.157947 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8b2k\" (UniqueName: \"kubernetes.io/projected/9ff01e87-9fc9-4936-9800-979037f6a8c0-kube-api-access-z8b2k\") pod \"9ff01e87-9fc9-4936-9800-979037f6a8c0\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.158304 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-utilities\") pod \"9ff01e87-9fc9-4936-9800-979037f6a8c0\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.158453 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-catalog-content\") pod \"9ff01e87-9fc9-4936-9800-979037f6a8c0\" (UID: \"9ff01e87-9fc9-4936-9800-979037f6a8c0\") " Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.158884 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-utilities" (OuterVolumeSpecName: "utilities") pod "9ff01e87-9fc9-4936-9800-979037f6a8c0" (UID: "9ff01e87-9fc9-4936-9800-979037f6a8c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.160699 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.162300 4745 scope.go:117] "RemoveContainer" containerID="9ecd3c58b53180703f4f9680bffe362e65ed3fe638e9155f48da48d61362c747" Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.167957 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ff01e87-9fc9-4936-9800-979037f6a8c0-kube-api-access-z8b2k" (OuterVolumeSpecName: "kube-api-access-z8b2k") pod "9ff01e87-9fc9-4936-9800-979037f6a8c0" (UID: "9ff01e87-9fc9-4936-9800-979037f6a8c0"). InnerVolumeSpecName "kube-api-access-z8b2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.228573 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9ff01e87-9fc9-4936-9800-979037f6a8c0" (UID: "9ff01e87-9fc9-4936-9800-979037f6a8c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.256999 4745 scope.go:117] "RemoveContainer" containerID="3fd3e46f7bb47ab5bfbad1dcbf91aa0b0244627aeed15dab6565908633497662" Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.264298 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8b2k\" (UniqueName: \"kubernetes.io/projected/9ff01e87-9fc9-4936-9800-979037f6a8c0-kube-api-access-z8b2k\") on node \"crc\" DevicePath \"\"" Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.264369 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9ff01e87-9fc9-4936-9800-979037f6a8c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.463100 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ckxtf"] Jan 21 12:27:07 crc kubenswrapper[4745]: I0121 12:27:07.474848 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ckxtf"] Jan 21 12:27:08 crc kubenswrapper[4745]: I0121 12:27:08.015333 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ff01e87-9fc9-4936-9800-979037f6a8c0" path="/var/lib/kubelet/pods/9ff01e87-9fc9-4936-9800-979037f6a8c0/volumes" Jan 21 12:27:15 crc kubenswrapper[4745]: I0121 12:27:15.651431 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:27:15 crc kubenswrapper[4745]: I0121 12:27:15.746170 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:27:15 crc kubenswrapper[4745]: I0121 12:27:15.918353 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwf9k"] Jan 21 12:27:17 crc kubenswrapper[4745]: I0121 12:27:17.217854 
4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bwf9k" podUID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerName="registry-server" containerID="cri-o://28f300342f7d97ff0bddde49ae740ced5435d602971bf206a221dd846fe510fe" gracePeriod=2 Jan 21 12:27:17 crc kubenswrapper[4745]: E0121 12:27:17.315027 4745 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4b44e2ef_fff0_4999_b0e1_9598295d6dec.slice/crio-28f300342f7d97ff0bddde49ae740ced5435d602971bf206a221dd846fe510fe.scope\": RecentStats: unable to find data in memory cache]" Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.239642 4745 generic.go:334] "Generic (PLEG): container finished" podID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerID="28f300342f7d97ff0bddde49ae740ced5435d602971bf206a221dd846fe510fe" exitCode=0 Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.239966 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwf9k" event={"ID":"4b44e2ef-fff0-4999-b0e1-9598295d6dec","Type":"ContainerDied","Data":"28f300342f7d97ff0bddde49ae740ced5435d602971bf206a221dd846fe510fe"} Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.458517 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.587273 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-catalog-content\") pod \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.587409 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whkcw\" (UniqueName: \"kubernetes.io/projected/4b44e2ef-fff0-4999-b0e1-9598295d6dec-kube-api-access-whkcw\") pod \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.587546 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-utilities\") pod \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\" (UID: \"4b44e2ef-fff0-4999-b0e1-9598295d6dec\") " Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.588714 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-utilities" (OuterVolumeSpecName: "utilities") pod "4b44e2ef-fff0-4999-b0e1-9598295d6dec" (UID: "4b44e2ef-fff0-4999-b0e1-9598295d6dec"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.594660 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b44e2ef-fff0-4999-b0e1-9598295d6dec-kube-api-access-whkcw" (OuterVolumeSpecName: "kube-api-access-whkcw") pod "4b44e2ef-fff0-4999-b0e1-9598295d6dec" (UID: "4b44e2ef-fff0-4999-b0e1-9598295d6dec"). InnerVolumeSpecName "kube-api-access-whkcw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.611711 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4b44e2ef-fff0-4999-b0e1-9598295d6dec" (UID: "4b44e2ef-fff0-4999-b0e1-9598295d6dec"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.689736 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.689771 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whkcw\" (UniqueName: \"kubernetes.io/projected/4b44e2ef-fff0-4999-b0e1-9598295d6dec-kube-api-access-whkcw\") on node \"crc\" DevicePath \"\"" Jan 21 12:27:18 crc kubenswrapper[4745]: I0121 12:27:18.689781 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4b44e2ef-fff0-4999-b0e1-9598295d6dec-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:27:19 crc kubenswrapper[4745]: I0121 12:27:19.252496 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bwf9k" event={"ID":"4b44e2ef-fff0-4999-b0e1-9598295d6dec","Type":"ContainerDied","Data":"fd4b2c6f5d78863f39f1f03464320587a64bef117e09645e7bcf3846e59f42d1"} Jan 21 12:27:19 crc kubenswrapper[4745]: I0121 12:27:19.252600 4745 scope.go:117] "RemoveContainer" containerID="28f300342f7d97ff0bddde49ae740ced5435d602971bf206a221dd846fe510fe" Jan 21 12:27:19 crc kubenswrapper[4745]: I0121 12:27:19.252758 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bwf9k" Jan 21 12:27:19 crc kubenswrapper[4745]: I0121 12:27:19.281918 4745 scope.go:117] "RemoveContainer" containerID="cdf0cf79f43e14a6b8968eaa7b9a8ba51e30c35b84031d3d4253fd480f83a6fb" Jan 21 12:27:19 crc kubenswrapper[4745]: I0121 12:27:19.294082 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwf9k"] Jan 21 12:27:19 crc kubenswrapper[4745]: I0121 12:27:19.305085 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bwf9k"] Jan 21 12:27:19 crc kubenswrapper[4745]: I0121 12:27:19.322196 4745 scope.go:117] "RemoveContainer" containerID="6a128b35e33c4b4537b85882c8ed6fc6594f582411321d333456d2a8e0bae360" Jan 21 12:27:20 crc kubenswrapper[4745]: I0121 12:27:20.011795 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" path="/var/lib/kubelet/pods/4b44e2ef-fff0-4999-b0e1-9598295d6dec/volumes" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.031460 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d8qhr"] Jan 21 12:27:36 crc kubenswrapper[4745]: E0121 12:27:36.032382 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerName="extract-content" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.032396 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerName="extract-content" Jan 21 12:27:36 crc kubenswrapper[4745]: E0121 12:27:36.032412 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff01e87-9fc9-4936-9800-979037f6a8c0" containerName="extract-utilities" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.032420 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff01e87-9fc9-4936-9800-979037f6a8c0" containerName="extract-utilities" Jan 
21 12:27:36 crc kubenswrapper[4745]: E0121 12:27:36.032436 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff01e87-9fc9-4936-9800-979037f6a8c0" containerName="extract-content" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.032444 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff01e87-9fc9-4936-9800-979037f6a8c0" containerName="extract-content" Jan 21 12:27:36 crc kubenswrapper[4745]: E0121 12:27:36.032459 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerName="registry-server" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.032464 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerName="registry-server" Jan 21 12:27:36 crc kubenswrapper[4745]: E0121 12:27:36.032484 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerName="extract-utilities" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.032490 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerName="extract-utilities" Jan 21 12:27:36 crc kubenswrapper[4745]: E0121 12:27:36.032506 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ff01e87-9fc9-4936-9800-979037f6a8c0" containerName="registry-server" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.032511 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ff01e87-9fc9-4936-9800-979037f6a8c0" containerName="registry-server" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.032694 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ff01e87-9fc9-4936-9800-979037f6a8c0" containerName="registry-server" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.032709 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b44e2ef-fff0-4999-b0e1-9598295d6dec" containerName="registry-server" Jan 
21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.034267 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.050388 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d8qhr"] Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.087537 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-catalog-content\") pod \"redhat-operators-d8qhr\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.087635 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2558\" (UniqueName: \"kubernetes.io/projected/40ffad66-a21f-4ef6-a842-22061a4eebe3-kube-api-access-d2558\") pod \"redhat-operators-d8qhr\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.087724 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-utilities\") pod \"redhat-operators-d8qhr\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.189307 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2558\" (UniqueName: \"kubernetes.io/projected/40ffad66-a21f-4ef6-a842-22061a4eebe3-kube-api-access-d2558\") pod \"redhat-operators-d8qhr\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 
12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.189423 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-utilities\") pod \"redhat-operators-d8qhr\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.189486 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-catalog-content\") pod \"redhat-operators-d8qhr\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.190006 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-catalog-content\") pod \"redhat-operators-d8qhr\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.190279 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-utilities\") pod \"redhat-operators-d8qhr\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.228186 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2558\" (UniqueName: \"kubernetes.io/projected/40ffad66-a21f-4ef6-a842-22061a4eebe3-kube-api-access-d2558\") pod \"redhat-operators-d8qhr\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.376520 4745 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:36 crc kubenswrapper[4745]: I0121 12:27:36.937003 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d8qhr"] Jan 21 12:27:37 crc kubenswrapper[4745]: I0121 12:27:37.466284 4745 generic.go:334] "Generic (PLEG): container finished" podID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerID="a4880836fb32d8f81c97e895d4d1516149038976a6bd10e479f1dabd12ffe5f1" exitCode=0 Jan 21 12:27:37 crc kubenswrapper[4745]: I0121 12:27:37.466324 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8qhr" event={"ID":"40ffad66-a21f-4ef6-a842-22061a4eebe3","Type":"ContainerDied","Data":"a4880836fb32d8f81c97e895d4d1516149038976a6bd10e479f1dabd12ffe5f1"} Jan 21 12:27:37 crc kubenswrapper[4745]: I0121 12:27:37.466573 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8qhr" event={"ID":"40ffad66-a21f-4ef6-a842-22061a4eebe3","Type":"ContainerStarted","Data":"2cd35560f487e7977dcb11f050654304f4cf93873f6f8f251495bc641be1fa26"} Jan 21 12:27:39 crc kubenswrapper[4745]: I0121 12:27:39.496475 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8qhr" event={"ID":"40ffad66-a21f-4ef6-a842-22061a4eebe3","Type":"ContainerStarted","Data":"4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e"} Jan 21 12:27:43 crc kubenswrapper[4745]: I0121 12:27:43.532503 4745 generic.go:334] "Generic (PLEG): container finished" podID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerID="4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e" exitCode=0 Jan 21 12:27:43 crc kubenswrapper[4745]: I0121 12:27:43.532561 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8qhr" 
event={"ID":"40ffad66-a21f-4ef6-a842-22061a4eebe3","Type":"ContainerDied","Data":"4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e"} Jan 21 12:27:44 crc kubenswrapper[4745]: I0121 12:27:44.542611 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8qhr" event={"ID":"40ffad66-a21f-4ef6-a842-22061a4eebe3","Type":"ContainerStarted","Data":"ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952"} Jan 21 12:27:44 crc kubenswrapper[4745]: I0121 12:27:44.572771 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d8qhr" podStartSLOduration=1.879536619 podStartE2EDuration="8.572732736s" podCreationTimestamp="2026-01-21 12:27:36 +0000 UTC" firstStartedPulling="2026-01-21 12:27:37.468330195 +0000 UTC m=+6641.929117793" lastFinishedPulling="2026-01-21 12:27:44.161526312 +0000 UTC m=+6648.622313910" observedRunningTime="2026-01-21 12:27:44.559422429 +0000 UTC m=+6649.020210027" watchObservedRunningTime="2026-01-21 12:27:44.572732736 +0000 UTC m=+6649.033520334" Jan 21 12:27:46 crc kubenswrapper[4745]: I0121 12:27:46.377181 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:46 crc kubenswrapper[4745]: I0121 12:27:46.377659 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:27:47 crc kubenswrapper[4745]: I0121 12:27:47.432006 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d8qhr" podUID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerName="registry-server" probeResult="failure" output=< Jan 21 12:27:47 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:27:47 crc kubenswrapper[4745]: > Jan 21 12:27:57 crc kubenswrapper[4745]: I0121 12:27:57.448306 4745 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-d8qhr" podUID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerName="registry-server" probeResult="failure" output=< Jan 21 12:27:57 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:27:57 crc kubenswrapper[4745]: > Jan 21 12:27:58 crc kubenswrapper[4745]: E0121 12:27:58.325677 4745 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.78:51374->38.129.56.78:36213: write tcp 38.129.56.78:51374->38.129.56.78:36213: write: connection reset by peer Jan 21 12:28:06 crc kubenswrapper[4745]: I0121 12:28:06.439804 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:28:06 crc kubenswrapper[4745]: I0121 12:28:06.489310 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:28:07 crc kubenswrapper[4745]: I0121 12:28:07.238435 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d8qhr"] Jan 21 12:28:07 crc kubenswrapper[4745]: I0121 12:28:07.739973 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d8qhr" podUID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerName="registry-server" containerID="cri-o://ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952" gracePeriod=2 Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.211164 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.359889 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-utilities\") pod \"40ffad66-a21f-4ef6-a842-22061a4eebe3\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.360110 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2558\" (UniqueName: \"kubernetes.io/projected/40ffad66-a21f-4ef6-a842-22061a4eebe3-kube-api-access-d2558\") pod \"40ffad66-a21f-4ef6-a842-22061a4eebe3\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.360191 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-catalog-content\") pod \"40ffad66-a21f-4ef6-a842-22061a4eebe3\" (UID: \"40ffad66-a21f-4ef6-a842-22061a4eebe3\") " Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.360975 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-utilities" (OuterVolumeSpecName: "utilities") pod "40ffad66-a21f-4ef6-a842-22061a4eebe3" (UID: "40ffad66-a21f-4ef6-a842-22061a4eebe3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.367735 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ffad66-a21f-4ef6-a842-22061a4eebe3-kube-api-access-d2558" (OuterVolumeSpecName: "kube-api-access-d2558") pod "40ffad66-a21f-4ef6-a842-22061a4eebe3" (UID: "40ffad66-a21f-4ef6-a842-22061a4eebe3"). InnerVolumeSpecName "kube-api-access-d2558". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.462127 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.462162 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2558\" (UniqueName: \"kubernetes.io/projected/40ffad66-a21f-4ef6-a842-22061a4eebe3-kube-api-access-d2558\") on node \"crc\" DevicePath \"\"" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.474410 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40ffad66-a21f-4ef6-a842-22061a4eebe3" (UID: "40ffad66-a21f-4ef6-a842-22061a4eebe3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.563575 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40ffad66-a21f-4ef6-a842-22061a4eebe3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.750757 4745 generic.go:334] "Generic (PLEG): container finished" podID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerID="ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952" exitCode=0 Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.750841 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d8qhr" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.750845 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8qhr" event={"ID":"40ffad66-a21f-4ef6-a842-22061a4eebe3","Type":"ContainerDied","Data":"ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952"} Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.752237 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d8qhr" event={"ID":"40ffad66-a21f-4ef6-a842-22061a4eebe3","Type":"ContainerDied","Data":"2cd35560f487e7977dcb11f050654304f4cf93873f6f8f251495bc641be1fa26"} Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.752263 4745 scope.go:117] "RemoveContainer" containerID="ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.781027 4745 scope.go:117] "RemoveContainer" containerID="4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.795478 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d8qhr"] Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.808272 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d8qhr"] Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.821042 4745 scope.go:117] "RemoveContainer" containerID="a4880836fb32d8f81c97e895d4d1516149038976a6bd10e479f1dabd12ffe5f1" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.861762 4745 scope.go:117] "RemoveContainer" containerID="ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952" Jan 21 12:28:08 crc kubenswrapper[4745]: E0121 12:28:08.863160 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952\": container with ID starting with ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952 not found: ID does not exist" containerID="ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.863218 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952"} err="failed to get container status \"ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952\": rpc error: code = NotFound desc = could not find container \"ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952\": container with ID starting with ec1500030ddf1cacccc1a41f95ca7dff8710384af9641456af1a8eba196fc952 not found: ID does not exist" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.863247 4745 scope.go:117] "RemoveContainer" containerID="4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e" Jan 21 12:28:08 crc kubenswrapper[4745]: E0121 12:28:08.863572 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e\": container with ID starting with 4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e not found: ID does not exist" containerID="4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.863601 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e"} err="failed to get container status \"4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e\": rpc error: code = NotFound desc = could not find container \"4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e\": container with ID 
starting with 4bbd8a4a8c7b77cc4a2648d4ff2edc8202ff8cd6e3c15bca3691c9d43210060e not found: ID does not exist" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.863616 4745 scope.go:117] "RemoveContainer" containerID="a4880836fb32d8f81c97e895d4d1516149038976a6bd10e479f1dabd12ffe5f1" Jan 21 12:28:08 crc kubenswrapper[4745]: E0121 12:28:08.864099 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4880836fb32d8f81c97e895d4d1516149038976a6bd10e479f1dabd12ffe5f1\": container with ID starting with a4880836fb32d8f81c97e895d4d1516149038976a6bd10e479f1dabd12ffe5f1 not found: ID does not exist" containerID="a4880836fb32d8f81c97e895d4d1516149038976a6bd10e479f1dabd12ffe5f1" Jan 21 12:28:08 crc kubenswrapper[4745]: I0121 12:28:08.864147 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4880836fb32d8f81c97e895d4d1516149038976a6bd10e479f1dabd12ffe5f1"} err="failed to get container status \"a4880836fb32d8f81c97e895d4d1516149038976a6bd10e479f1dabd12ffe5f1\": rpc error: code = NotFound desc = could not find container \"a4880836fb32d8f81c97e895d4d1516149038976a6bd10e479f1dabd12ffe5f1\": container with ID starting with a4880836fb32d8f81c97e895d4d1516149038976a6bd10e479f1dabd12ffe5f1 not found: ID does not exist" Jan 21 12:28:10 crc kubenswrapper[4745]: I0121 12:28:10.011372 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ffad66-a21f-4ef6-a842-22061a4eebe3" path="/var/lib/kubelet/pods/40ffad66-a21f-4ef6-a842-22061a4eebe3/volumes" Jan 21 12:28:15 crc kubenswrapper[4745]: I0121 12:28:15.866253 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:28:15 crc kubenswrapper[4745]: I0121 
12:28:15.866661 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:28:18 crc kubenswrapper[4745]: E0121 12:28:18.569372 4745 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.78:51872->38.129.56.78:36213: write tcp 38.129.56.78:51872->38.129.56.78:36213: write: broken pipe Jan 21 12:28:45 crc kubenswrapper[4745]: I0121 12:28:45.866588 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:28:45 crc kubenswrapper[4745]: I0121 12:28:45.867187 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:29:15 crc kubenswrapper[4745]: I0121 12:29:15.866298 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:29:15 crc kubenswrapper[4745]: I0121 12:29:15.866814 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:29:15 crc kubenswrapper[4745]: I0121 12:29:15.866853 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 12:29:15 crc kubenswrapper[4745]: I0121 12:29:15.869127 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:29:15 crc kubenswrapper[4745]: I0121 12:29:15.869196 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" gracePeriod=600 Jan 21 12:29:16 crc kubenswrapper[4745]: E0121 12:29:16.015078 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:29:16 crc kubenswrapper[4745]: I0121 12:29:16.354247 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" exitCode=0 Jan 21 12:29:16 crc kubenswrapper[4745]: I0121 12:29:16.354299 4745 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04"} Jan 21 12:29:16 crc kubenswrapper[4745]: I0121 12:29:16.354338 4745 scope.go:117] "RemoveContainer" containerID="e53f7aab0975b19e20e20ce19f6355a505f5e23791714a3ba8eb233b74a7ba45" Jan 21 12:29:16 crc kubenswrapper[4745]: I0121 12:29:16.355019 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:29:16 crc kubenswrapper[4745]: E0121 12:29:16.355320 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:29:31 crc kubenswrapper[4745]: I0121 12:29:31.000564 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:29:31 crc kubenswrapper[4745]: E0121 12:29:31.001411 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:29:45 crc kubenswrapper[4745]: I0121 12:29:45.000825 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:29:45 crc kubenswrapper[4745]: E0121 12:29:45.001654 4745 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:29:57 crc kubenswrapper[4745]: I0121 12:29:57.000192 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:29:57 crc kubenswrapper[4745]: E0121 12:29:57.001929 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.200200 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh"] Jan 21 12:30:00 crc kubenswrapper[4745]: E0121 12:30:00.201058 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerName="registry-server" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.201078 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerName="registry-server" Jan 21 12:30:00 crc kubenswrapper[4745]: E0121 12:30:00.201097 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerName="extract-utilities" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.201107 4745 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerName="extract-utilities" Jan 21 12:30:00 crc kubenswrapper[4745]: E0121 12:30:00.201122 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerName="extract-content" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.201129 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerName="extract-content" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.201380 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="40ffad66-a21f-4ef6-a842-22061a4eebe3" containerName="registry-server" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.202209 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.211588 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh"] Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.272321 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.274190 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.350787 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56e490c-ed7b-4976-859a-6cd34a58530e-config-volume\") pod \"collect-profiles-29483310-7p6jh\" (UID: \"f56e490c-ed7b-4976-859a-6cd34a58530e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 
12:30:00.351259 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn4vg\" (UniqueName: \"kubernetes.io/projected/f56e490c-ed7b-4976-859a-6cd34a58530e-kube-api-access-sn4vg\") pod \"collect-profiles-29483310-7p6jh\" (UID: \"f56e490c-ed7b-4976-859a-6cd34a58530e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.351326 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f56e490c-ed7b-4976-859a-6cd34a58530e-secret-volume\") pod \"collect-profiles-29483310-7p6jh\" (UID: \"f56e490c-ed7b-4976-859a-6cd34a58530e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.453189 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f56e490c-ed7b-4976-859a-6cd34a58530e-secret-volume\") pod \"collect-profiles-29483310-7p6jh\" (UID: \"f56e490c-ed7b-4976-859a-6cd34a58530e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.453365 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56e490c-ed7b-4976-859a-6cd34a58530e-config-volume\") pod \"collect-profiles-29483310-7p6jh\" (UID: \"f56e490c-ed7b-4976-859a-6cd34a58530e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.453423 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn4vg\" (UniqueName: \"kubernetes.io/projected/f56e490c-ed7b-4976-859a-6cd34a58530e-kube-api-access-sn4vg\") pod \"collect-profiles-29483310-7p6jh\" (UID: 
\"f56e490c-ed7b-4976-859a-6cd34a58530e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.454461 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56e490c-ed7b-4976-859a-6cd34a58530e-config-volume\") pod \"collect-profiles-29483310-7p6jh\" (UID: \"f56e490c-ed7b-4976-859a-6cd34a58530e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.459195 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f56e490c-ed7b-4976-859a-6cd34a58530e-secret-volume\") pod \"collect-profiles-29483310-7p6jh\" (UID: \"f56e490c-ed7b-4976-859a-6cd34a58530e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.477523 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn4vg\" (UniqueName: \"kubernetes.io/projected/f56e490c-ed7b-4976-859a-6cd34a58530e-kube-api-access-sn4vg\") pod \"collect-profiles-29483310-7p6jh\" (UID: \"f56e490c-ed7b-4976-859a-6cd34a58530e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.523847 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:00 crc kubenswrapper[4745]: I0121 12:30:00.983605 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh"] Jan 21 12:30:01 crc kubenswrapper[4745]: I0121 12:30:01.826126 4745 generic.go:334] "Generic (PLEG): container finished" podID="f56e490c-ed7b-4976-859a-6cd34a58530e" containerID="1b1ac57ac03f6dc3fb93e44d2459c14b04f0ed0696b942520e667b13b0b3d158" exitCode=0 Jan 21 12:30:01 crc kubenswrapper[4745]: I0121 12:30:01.826179 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" event={"ID":"f56e490c-ed7b-4976-859a-6cd34a58530e","Type":"ContainerDied","Data":"1b1ac57ac03f6dc3fb93e44d2459c14b04f0ed0696b942520e667b13b0b3d158"} Jan 21 12:30:01 crc kubenswrapper[4745]: I0121 12:30:01.826821 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" event={"ID":"f56e490c-ed7b-4976-859a-6cd34a58530e","Type":"ContainerStarted","Data":"a6afa1f8358e48d78e01b3231bd5683a9bc7016168daf82df256e6f10f68b6b8"} Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.140672 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.219729 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56e490c-ed7b-4976-859a-6cd34a58530e-config-volume\") pod \"f56e490c-ed7b-4976-859a-6cd34a58530e\" (UID: \"f56e490c-ed7b-4976-859a-6cd34a58530e\") " Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.219842 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn4vg\" (UniqueName: \"kubernetes.io/projected/f56e490c-ed7b-4976-859a-6cd34a58530e-kube-api-access-sn4vg\") pod \"f56e490c-ed7b-4976-859a-6cd34a58530e\" (UID: \"f56e490c-ed7b-4976-859a-6cd34a58530e\") " Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.219958 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f56e490c-ed7b-4976-859a-6cd34a58530e-secret-volume\") pod \"f56e490c-ed7b-4976-859a-6cd34a58530e\" (UID: \"f56e490c-ed7b-4976-859a-6cd34a58530e\") " Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.220672 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56e490c-ed7b-4976-859a-6cd34a58530e-config-volume" (OuterVolumeSpecName: "config-volume") pod "f56e490c-ed7b-4976-859a-6cd34a58530e" (UID: "f56e490c-ed7b-4976-859a-6cd34a58530e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.221409 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f56e490c-ed7b-4976-859a-6cd34a58530e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.225630 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56e490c-ed7b-4976-859a-6cd34a58530e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f56e490c-ed7b-4976-859a-6cd34a58530e" (UID: "f56e490c-ed7b-4976-859a-6cd34a58530e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.226717 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56e490c-ed7b-4976-859a-6cd34a58530e-kube-api-access-sn4vg" (OuterVolumeSpecName: "kube-api-access-sn4vg") pod "f56e490c-ed7b-4976-859a-6cd34a58530e" (UID: "f56e490c-ed7b-4976-859a-6cd34a58530e"). InnerVolumeSpecName "kube-api-access-sn4vg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.323622 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn4vg\" (UniqueName: \"kubernetes.io/projected/f56e490c-ed7b-4976-859a-6cd34a58530e-kube-api-access-sn4vg\") on node \"crc\" DevicePath \"\"" Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.323666 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f56e490c-ed7b-4976-859a-6cd34a58530e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.863978 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" event={"ID":"f56e490c-ed7b-4976-859a-6cd34a58530e","Type":"ContainerDied","Data":"a6afa1f8358e48d78e01b3231bd5683a9bc7016168daf82df256e6f10f68b6b8"} Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.864016 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6afa1f8358e48d78e01b3231bd5683a9bc7016168daf82df256e6f10f68b6b8" Jan 21 12:30:03 crc kubenswrapper[4745]: I0121 12:30:03.864077 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483310-7p6jh" Jan 21 12:30:04 crc kubenswrapper[4745]: I0121 12:30:04.230883 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j"] Jan 21 12:30:04 crc kubenswrapper[4745]: I0121 12:30:04.239203 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483265-f8m4j"] Jan 21 12:30:06 crc kubenswrapper[4745]: I0121 12:30:06.013524 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d229af8-05fa-419b-ac5f-7b6ff269389b" path="/var/lib/kubelet/pods/8d229af8-05fa-419b-ac5f-7b6ff269389b/volumes" Jan 21 12:30:08 crc kubenswrapper[4745]: I0121 12:30:08.000600 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:30:08 crc kubenswrapper[4745]: E0121 12:30:08.001609 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:30:19 crc kubenswrapper[4745]: I0121 12:30:19.000712 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:30:19 crc kubenswrapper[4745]: E0121 12:30:19.001370 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:30:32 crc kubenswrapper[4745]: I0121 12:30:32.000740 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:30:32 crc kubenswrapper[4745]: E0121 12:30:32.002073 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:30:46 crc kubenswrapper[4745]: I0121 12:30:46.009154 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:30:46 crc kubenswrapper[4745]: E0121 12:30:46.009979 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:30:58 crc kubenswrapper[4745]: I0121 12:30:58.000687 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:30:58 crc kubenswrapper[4745]: E0121 12:30:58.001934 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:30:58 crc kubenswrapper[4745]: I0121 12:30:58.952768 4745 scope.go:117] "RemoveContainer" containerID="5a8273d289ac9fe308464452031f92bd605a18e31d70abd1650de14659af30fc" Jan 21 12:31:10 crc kubenswrapper[4745]: I0121 12:31:10.001759 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:31:10 crc kubenswrapper[4745]: E0121 12:31:10.002686 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:31:25 crc kubenswrapper[4745]: I0121 12:31:25.000559 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:31:25 crc kubenswrapper[4745]: E0121 12:31:25.001465 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:31:40 crc kubenswrapper[4745]: I0121 12:31:40.000235 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:31:40 crc kubenswrapper[4745]: E0121 12:31:40.001243 4745 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:31:51 crc kubenswrapper[4745]: I0121 12:31:51.000660 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:31:51 crc kubenswrapper[4745]: E0121 12:31:51.001413 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:32:02 crc kubenswrapper[4745]: I0121 12:32:02.001822 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:32:02 crc kubenswrapper[4745]: E0121 12:32:02.003003 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:32:17 crc kubenswrapper[4745]: I0121 12:32:17.001373 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:32:17 crc kubenswrapper[4745]: E0121 
12:32:17.002332 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:32:31 crc kubenswrapper[4745]: I0121 12:32:31.001881 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:32:31 crc kubenswrapper[4745]: E0121 12:32:31.002637 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:32:43 crc kubenswrapper[4745]: I0121 12:32:43.001077 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:32:43 crc kubenswrapper[4745]: E0121 12:32:43.001931 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:32:58 crc kubenswrapper[4745]: I0121 12:32:58.001470 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:32:58 crc 
kubenswrapper[4745]: E0121 12:32:58.002405 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:33:10 crc kubenswrapper[4745]: I0121 12:33:10.000004 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:33:10 crc kubenswrapper[4745]: E0121 12:33:10.000871 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:33:21 crc kubenswrapper[4745]: I0121 12:33:21.000589 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:33:21 crc kubenswrapper[4745]: E0121 12:33:21.001258 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:33:35 crc kubenswrapper[4745]: I0121 12:33:35.000595 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 
21 12:33:35 crc kubenswrapper[4745]: E0121 12:33:35.001368 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:33:49 crc kubenswrapper[4745]: I0121 12:33:49.000307 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:33:49 crc kubenswrapper[4745]: E0121 12:33:49.001399 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:34:01 crc kubenswrapper[4745]: I0121 12:34:00.999808 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:34:01 crc kubenswrapper[4745]: E0121 12:34:01.000508 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:34:15 crc kubenswrapper[4745]: I0121 12:34:15.000043 4745 scope.go:117] "RemoveContainer" 
containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:34:15 crc kubenswrapper[4745]: E0121 12:34:15.000771 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:34:28 crc kubenswrapper[4745]: I0121 12:34:28.000789 4745 scope.go:117] "RemoveContainer" containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:34:29 crc kubenswrapper[4745]: I0121 12:34:29.078620 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"85539d27e9372360a7e1ae69ec8f1ac0bf3b97a0b8949368acf6d172b6f2ebe7"} Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.421835 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5sqfw"] Jan 21 12:34:33 crc kubenswrapper[4745]: E0121 12:34:33.422884 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f56e490c-ed7b-4976-859a-6cd34a58530e" containerName="collect-profiles" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.422901 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="f56e490c-ed7b-4976-859a-6cd34a58530e" containerName="collect-profiles" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.423151 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="f56e490c-ed7b-4976-859a-6cd34a58530e" containerName="collect-profiles" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.426003 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.432669 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5sqfw"] Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.498360 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-utilities\") pod \"community-operators-5sqfw\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.498489 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp25k\" (UniqueName: \"kubernetes.io/projected/985b9c72-42ab-4411-8d9a-0d4ab468f18d-kube-api-access-cp25k\") pod \"community-operators-5sqfw\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.498510 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-catalog-content\") pod \"community-operators-5sqfw\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.601441 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-utilities\") pod \"community-operators-5sqfw\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.601578 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cp25k\" (UniqueName: \"kubernetes.io/projected/985b9c72-42ab-4411-8d9a-0d4ab468f18d-kube-api-access-cp25k\") pod \"community-operators-5sqfw\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.601605 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-catalog-content\") pod \"community-operators-5sqfw\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.602289 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-utilities\") pod \"community-operators-5sqfw\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.602556 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-catalog-content\") pod \"community-operators-5sqfw\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.623339 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp25k\" (UniqueName: \"kubernetes.io/projected/985b9c72-42ab-4411-8d9a-0d4ab468f18d-kube-api-access-cp25k\") pod \"community-operators-5sqfw\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:33 crc kubenswrapper[4745]: I0121 12:34:33.774212 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:34 crc kubenswrapper[4745]: I0121 12:34:34.325569 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5sqfw"] Jan 21 12:34:34 crc kubenswrapper[4745]: I0121 12:34:34.426575 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sqfw" event={"ID":"985b9c72-42ab-4411-8d9a-0d4ab468f18d","Type":"ContainerStarted","Data":"1b2511d6ce60942c23e622779a3c44985faae009350c8c91dda4b77752258ffe"} Jan 21 12:34:35 crc kubenswrapper[4745]: I0121 12:34:35.437020 4745 generic.go:334] "Generic (PLEG): container finished" podID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" containerID="b557a463b7ae852f0ffb6859f29b4fb008fb6454892ddf83dba42b5cf0a9c934" exitCode=0 Jan 21 12:34:35 crc kubenswrapper[4745]: I0121 12:34:35.437125 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sqfw" event={"ID":"985b9c72-42ab-4411-8d9a-0d4ab468f18d","Type":"ContainerDied","Data":"b557a463b7ae852f0ffb6859f29b4fb008fb6454892ddf83dba42b5cf0a9c934"} Jan 21 12:34:35 crc kubenswrapper[4745]: I0121 12:34:35.440653 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:34:36 crc kubenswrapper[4745]: I0121 12:34:36.447134 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sqfw" event={"ID":"985b9c72-42ab-4411-8d9a-0d4ab468f18d","Type":"ContainerStarted","Data":"851bcd62d877f48425bffdc4b3fefb0da917f45b2b5c6187642ec0ef9107029b"} Jan 21 12:34:38 crc kubenswrapper[4745]: I0121 12:34:38.467295 4745 generic.go:334] "Generic (PLEG): container finished" podID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" containerID="851bcd62d877f48425bffdc4b3fefb0da917f45b2b5c6187642ec0ef9107029b" exitCode=0 Jan 21 12:34:38 crc kubenswrapper[4745]: I0121 12:34:38.467384 4745 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-5sqfw" event={"ID":"985b9c72-42ab-4411-8d9a-0d4ab468f18d","Type":"ContainerDied","Data":"851bcd62d877f48425bffdc4b3fefb0da917f45b2b5c6187642ec0ef9107029b"} Jan 21 12:34:39 crc kubenswrapper[4745]: I0121 12:34:39.483298 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sqfw" event={"ID":"985b9c72-42ab-4411-8d9a-0d4ab468f18d","Type":"ContainerStarted","Data":"89ddf19b6b592b458d91e50b83ddee5f50b789bcf22f49a40776fd3023e600d7"} Jan 21 12:34:39 crc kubenswrapper[4745]: I0121 12:34:39.508116 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5sqfw" podStartSLOduration=3.020562888 podStartE2EDuration="6.507735549s" podCreationTimestamp="2026-01-21 12:34:33 +0000 UTC" firstStartedPulling="2026-01-21 12:34:35.440007014 +0000 UTC m=+7059.900794612" lastFinishedPulling="2026-01-21 12:34:38.927179655 +0000 UTC m=+7063.387967273" observedRunningTime="2026-01-21 12:34:39.506374951 +0000 UTC m=+7063.967162549" watchObservedRunningTime="2026-01-21 12:34:39.507735549 +0000 UTC m=+7063.968523147" Jan 21 12:34:43 crc kubenswrapper[4745]: I0121 12:34:43.517836 4745 generic.go:334] "Generic (PLEG): container finished" podID="58f0330f-8bbd-440b-8396-79f1976798af" containerID="cc316e9d31bb8db0baccb422f1fffe7c20ec226cb3ae73ea30fe41eae7a07b76" exitCode=1 Jan 21 12:34:43 crc kubenswrapper[4745]: I0121 12:34:43.517926 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"58f0330f-8bbd-440b-8396-79f1976798af","Type":"ContainerDied","Data":"cc316e9d31bb8db0baccb422f1fffe7c20ec226cb3ae73ea30fe41eae7a07b76"} Jan 21 12:34:43 crc kubenswrapper[4745]: I0121 12:34:43.774834 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:43 crc kubenswrapper[4745]: 
I0121 12:34:43.774898 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:43 crc kubenswrapper[4745]: I0121 12:34:43.827189 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:44 crc kubenswrapper[4745]: I0121 12:34:44.578898 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.191736 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.273616 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config\") pod \"58f0330f-8bbd-440b-8396-79f1976798af\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.273692 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-temporary\") pod \"58f0330f-8bbd-440b-8396-79f1976798af\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.273766 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config-secret\") pod \"58f0330f-8bbd-440b-8396-79f1976798af\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.273806 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-ggfw5\" (UniqueName: \"kubernetes.io/projected/58f0330f-8bbd-440b-8396-79f1976798af-kube-api-access-ggfw5\") pod \"58f0330f-8bbd-440b-8396-79f1976798af\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.273925 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"58f0330f-8bbd-440b-8396-79f1976798af\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.273976 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-config-data\") pod \"58f0330f-8bbd-440b-8396-79f1976798af\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.273990 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ssh-key\") pod \"58f0330f-8bbd-440b-8396-79f1976798af\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.274125 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ca-certs\") pod \"58f0330f-8bbd-440b-8396-79f1976798af\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.274180 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-workdir\") pod \"58f0330f-8bbd-440b-8396-79f1976798af\" (UID: \"58f0330f-8bbd-440b-8396-79f1976798af\") " Jan 21 12:34:45 crc 
kubenswrapper[4745]: I0121 12:34:45.277676 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "58f0330f-8bbd-440b-8396-79f1976798af" (UID: "58f0330f-8bbd-440b-8396-79f1976798af"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.278669 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-config-data" (OuterVolumeSpecName: "config-data") pod "58f0330f-8bbd-440b-8396-79f1976798af" (UID: "58f0330f-8bbd-440b-8396-79f1976798af"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.281377 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "58f0330f-8bbd-440b-8396-79f1976798af" (UID: "58f0330f-8bbd-440b-8396-79f1976798af"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.282607 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58f0330f-8bbd-440b-8396-79f1976798af-kube-api-access-ggfw5" (OuterVolumeSpecName: "kube-api-access-ggfw5") pod "58f0330f-8bbd-440b-8396-79f1976798af" (UID: "58f0330f-8bbd-440b-8396-79f1976798af"). InnerVolumeSpecName "kube-api-access-ggfw5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.284269 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "test-operator-logs") pod "58f0330f-8bbd-440b-8396-79f1976798af" (UID: "58f0330f-8bbd-440b-8396-79f1976798af"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.306581 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "58f0330f-8bbd-440b-8396-79f1976798af" (UID: "58f0330f-8bbd-440b-8396-79f1976798af"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.313277 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "58f0330f-8bbd-440b-8396-79f1976798af" (UID: "58f0330f-8bbd-440b-8396-79f1976798af"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.327670 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "58f0330f-8bbd-440b-8396-79f1976798af" (UID: "58f0330f-8bbd-440b-8396-79f1976798af"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.337418 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "58f0330f-8bbd-440b-8396-79f1976798af" (UID: "58f0330f-8bbd-440b-8396-79f1976798af"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.377258 4745 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.377303 4745 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.377318 4745 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/58f0330f-8bbd-440b-8396-79f1976798af-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.377331 4745 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.377450 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ggfw5\" (UniqueName: \"kubernetes.io/projected/58f0330f-8bbd-440b-8396-79f1976798af-kube-api-access-ggfw5\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 
12:34:45.377699 4745 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.377752 4745 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/58f0330f-8bbd-440b-8396-79f1976798af-config-data\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.377771 4745 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.377786 4745 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/58f0330f-8bbd-440b-8396-79f1976798af-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.403697 4745 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.480556 4745 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.539930 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"58f0330f-8bbd-440b-8396-79f1976798af","Type":"ContainerDied","Data":"7b48bd1471dfb040ee9cc8077e3c6fa9d4fcce58350b52499654ff709ba69019"} Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.539982 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 21 12:34:45 crc kubenswrapper[4745]: I0121 12:34:45.540685 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b48bd1471dfb040ee9cc8077e3c6fa9d4fcce58350b52499654ff709ba69019" Jan 21 12:34:47 crc kubenswrapper[4745]: I0121 12:34:47.394357 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5sqfw"] Jan 21 12:34:47 crc kubenswrapper[4745]: I0121 12:34:47.396641 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5sqfw" podUID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" containerName="registry-server" containerID="cri-o://89ddf19b6b592b458d91e50b83ddee5f50b789bcf22f49a40776fd3023e600d7" gracePeriod=2 Jan 21 12:34:47 crc kubenswrapper[4745]: I0121 12:34:47.562204 4745 generic.go:334] "Generic (PLEG): container finished" podID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" containerID="89ddf19b6b592b458d91e50b83ddee5f50b789bcf22f49a40776fd3023e600d7" exitCode=0 Jan 21 12:34:47 crc kubenswrapper[4745]: I0121 12:34:47.562278 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sqfw" event={"ID":"985b9c72-42ab-4411-8d9a-0d4ab468f18d","Type":"ContainerDied","Data":"89ddf19b6b592b458d91e50b83ddee5f50b789bcf22f49a40776fd3023e600d7"} Jan 21 12:34:47 crc kubenswrapper[4745]: I0121 12:34:47.819309 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:47 crc kubenswrapper[4745]: I0121 12:34:47.934156 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-utilities\") pod \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " Jan 21 12:34:47 crc kubenswrapper[4745]: I0121 12:34:47.934227 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp25k\" (UniqueName: \"kubernetes.io/projected/985b9c72-42ab-4411-8d9a-0d4ab468f18d-kube-api-access-cp25k\") pod \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " Jan 21 12:34:47 crc kubenswrapper[4745]: I0121 12:34:47.934367 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-catalog-content\") pod \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\" (UID: \"985b9c72-42ab-4411-8d9a-0d4ab468f18d\") " Jan 21 12:34:47 crc kubenswrapper[4745]: I0121 12:34:47.935727 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-utilities" (OuterVolumeSpecName: "utilities") pod "985b9c72-42ab-4411-8d9a-0d4ab468f18d" (UID: "985b9c72-42ab-4411-8d9a-0d4ab468f18d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:34:47 crc kubenswrapper[4745]: I0121 12:34:47.942887 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/985b9c72-42ab-4411-8d9a-0d4ab468f18d-kube-api-access-cp25k" (OuterVolumeSpecName: "kube-api-access-cp25k") pod "985b9c72-42ab-4411-8d9a-0d4ab468f18d" (UID: "985b9c72-42ab-4411-8d9a-0d4ab468f18d"). InnerVolumeSpecName "kube-api-access-cp25k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:34:47 crc kubenswrapper[4745]: I0121 12:34:47.991644 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "985b9c72-42ab-4411-8d9a-0d4ab468f18d" (UID: "985b9c72-42ab-4411-8d9a-0d4ab468f18d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:34:48 crc kubenswrapper[4745]: I0121 12:34:48.036081 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:48 crc kubenswrapper[4745]: I0121 12:34:48.036140 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp25k\" (UniqueName: \"kubernetes.io/projected/985b9c72-42ab-4411-8d9a-0d4ab468f18d-kube-api-access-cp25k\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:48 crc kubenswrapper[4745]: I0121 12:34:48.036150 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/985b9c72-42ab-4411-8d9a-0d4ab468f18d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:34:48 crc kubenswrapper[4745]: I0121 12:34:48.584890 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5sqfw" event={"ID":"985b9c72-42ab-4411-8d9a-0d4ab468f18d","Type":"ContainerDied","Data":"1b2511d6ce60942c23e622779a3c44985faae009350c8c91dda4b77752258ffe"} Jan 21 12:34:48 crc kubenswrapper[4745]: I0121 12:34:48.584946 4745 scope.go:117] "RemoveContainer" containerID="89ddf19b6b592b458d91e50b83ddee5f50b789bcf22f49a40776fd3023e600d7" Jan 21 12:34:48 crc kubenswrapper[4745]: I0121 12:34:48.584946 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5sqfw" Jan 21 12:34:48 crc kubenswrapper[4745]: I0121 12:34:48.616777 4745 scope.go:117] "RemoveContainer" containerID="851bcd62d877f48425bffdc4b3fefb0da917f45b2b5c6187642ec0ef9107029b" Jan 21 12:34:48 crc kubenswrapper[4745]: I0121 12:34:48.619689 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5sqfw"] Jan 21 12:34:48 crc kubenswrapper[4745]: I0121 12:34:48.633899 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5sqfw"] Jan 21 12:34:48 crc kubenswrapper[4745]: I0121 12:34:48.645811 4745 scope.go:117] "RemoveContainer" containerID="b557a463b7ae852f0ffb6859f29b4fb008fb6454892ddf83dba42b5cf0a9c934" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.019436 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" path="/var/lib/kubelet/pods/985b9c72-42ab-4411-8d9a-0d4ab468f18d/volumes" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.144392 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 12:34:50 crc kubenswrapper[4745]: E0121 12:34:50.145045 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" containerName="extract-utilities" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.145065 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" containerName="extract-utilities" Jan 21 12:34:50 crc kubenswrapper[4745]: E0121 12:34:50.145078 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" containerName="registry-server" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.145084 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" 
containerName="registry-server" Jan 21 12:34:50 crc kubenswrapper[4745]: E0121 12:34:50.145129 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" containerName="extract-content" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.145136 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" containerName="extract-content" Jan 21 12:34:50 crc kubenswrapper[4745]: E0121 12:34:50.145151 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58f0330f-8bbd-440b-8396-79f1976798af" containerName="tempest-tests-tempest-tests-runner" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.145157 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="58f0330f-8bbd-440b-8396-79f1976798af" containerName="tempest-tests-tempest-tests-runner" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.145379 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="985b9c72-42ab-4411-8d9a-0d4ab468f18d" containerName="registry-server" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.145412 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="58f0330f-8bbd-440b-8396-79f1976798af" containerName="tempest-tests-tempest-tests-runner" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.146272 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.153229 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.168105 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-rqj79" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.279332 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ec6af6b-ebc1-4201-bceb-a8bc1907284a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.279435 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74bs6\" (UniqueName: \"kubernetes.io/projected/3ec6af6b-ebc1-4201-bceb-a8bc1907284a-kube-api-access-74bs6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ec6af6b-ebc1-4201-bceb-a8bc1907284a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.383169 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ec6af6b-ebc1-4201-bceb-a8bc1907284a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.383249 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74bs6\" (UniqueName: 
\"kubernetes.io/projected/3ec6af6b-ebc1-4201-bceb-a8bc1907284a-kube-api-access-74bs6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ec6af6b-ebc1-4201-bceb-a8bc1907284a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.385210 4745 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ec6af6b-ebc1-4201-bceb-a8bc1907284a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.415086 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74bs6\" (UniqueName: \"kubernetes.io/projected/3ec6af6b-ebc1-4201-bceb-a8bc1907284a-kube-api-access-74bs6\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ec6af6b-ebc1-4201-bceb-a8bc1907284a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.424616 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"3ec6af6b-ebc1-4201-bceb-a8bc1907284a\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 12:34:50 crc kubenswrapper[4745]: I0121 12:34:50.478879 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 21 12:34:51 crc kubenswrapper[4745]: I0121 12:34:51.030522 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 21 12:34:51 crc kubenswrapper[4745]: I0121 12:34:51.630040 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3ec6af6b-ebc1-4201-bceb-a8bc1907284a","Type":"ContainerStarted","Data":"812fecb0fee3fd76b9a937b21d9fd069d0d5b6508529c564b44b3ff981cbc883"} Jan 21 12:34:52 crc kubenswrapper[4745]: I0121 12:34:52.642952 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"3ec6af6b-ebc1-4201-bceb-a8bc1907284a","Type":"ContainerStarted","Data":"3b0cdadfaea29a9e5821ae66d1d34c8f6fae6be1938a2389eacc11277f378eac"} Jan 21 12:34:52 crc kubenswrapper[4745]: I0121 12:34:52.667586 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.719012286 podStartE2EDuration="2.667564459s" podCreationTimestamp="2026-01-21 12:34:50 +0000 UTC" firstStartedPulling="2026-01-21 12:34:51.041177709 +0000 UTC m=+7075.501965317" lastFinishedPulling="2026-01-21 12:34:51.989729892 +0000 UTC m=+7076.450517490" observedRunningTime="2026-01-21 12:34:52.656473572 +0000 UTC m=+7077.117261170" watchObservedRunningTime="2026-01-21 12:34:52.667564459 +0000 UTC m=+7077.128352077" Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.589601 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hb6b2/must-gather-g4bpz"] Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.592169 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb6b2/must-gather-g4bpz" Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.599665 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hb6b2"/"openshift-service-ca.crt" Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.608756 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hb6b2/must-gather-g4bpz"] Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.612224 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-hb6b2"/"kube-root-ca.crt" Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.612384 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-hb6b2"/"default-dockercfg-4fbhb" Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.675206 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-must-gather-output\") pod \"must-gather-g4bpz\" (UID: \"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522\") " pod="openshift-must-gather-hb6b2/must-gather-g4bpz" Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.675301 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxs2d\" (UniqueName: \"kubernetes.io/projected/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-kube-api-access-bxs2d\") pod \"must-gather-g4bpz\" (UID: \"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522\") " pod="openshift-must-gather-hb6b2/must-gather-g4bpz" Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.776954 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxs2d\" (UniqueName: \"kubernetes.io/projected/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-kube-api-access-bxs2d\") pod \"must-gather-g4bpz\" (UID: \"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522\") " 
pod="openshift-must-gather-hb6b2/must-gather-g4bpz" Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.777110 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-must-gather-output\") pod \"must-gather-g4bpz\" (UID: \"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522\") " pod="openshift-must-gather-hb6b2/must-gather-g4bpz" Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.777520 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-must-gather-output\") pod \"must-gather-g4bpz\" (UID: \"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522\") " pod="openshift-must-gather-hb6b2/must-gather-g4bpz" Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.798498 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxs2d\" (UniqueName: \"kubernetes.io/projected/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-kube-api-access-bxs2d\") pod \"must-gather-g4bpz\" (UID: \"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522\") " pod="openshift-must-gather-hb6b2/must-gather-g4bpz" Jan 21 12:35:30 crc kubenswrapper[4745]: I0121 12:35:30.916241 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb6b2/must-gather-g4bpz" Jan 21 12:35:31 crc kubenswrapper[4745]: I0121 12:35:31.546512 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-hb6b2/must-gather-g4bpz"] Jan 21 12:35:32 crc kubenswrapper[4745]: I0121 12:35:32.077019 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/must-gather-g4bpz" event={"ID":"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522","Type":"ContainerStarted","Data":"334c6a46aa3a1724ae13c50837aaf159fcea3bd1d443b80f9e74a3f9545a6345"} Jan 21 12:35:42 crc kubenswrapper[4745]: I0121 12:35:42.199757 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/must-gather-g4bpz" event={"ID":"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522","Type":"ContainerStarted","Data":"24840b63b2691cf235b0530fa5478355cd5d0f3b0144cc6f40de8048b0909da4"} Jan 21 12:35:43 crc kubenswrapper[4745]: I0121 12:35:43.215165 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/must-gather-g4bpz" event={"ID":"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522","Type":"ContainerStarted","Data":"fd4172a33328a1d7937186c84f8454b8495c8c2b617ca91f10dc76b33a6501b8"} Jan 21 12:35:43 crc kubenswrapper[4745]: I0121 12:35:43.268773 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hb6b2/must-gather-g4bpz" podStartSLOduration=3.198858862 podStartE2EDuration="13.268744459s" podCreationTimestamp="2026-01-21 12:35:30 +0000 UTC" firstStartedPulling="2026-01-21 12:35:31.557527147 +0000 UTC m=+7116.018314745" lastFinishedPulling="2026-01-21 12:35:41.627412714 +0000 UTC m=+7126.088200342" observedRunningTime="2026-01-21 12:35:43.25401472 +0000 UTC m=+7127.714802328" watchObservedRunningTime="2026-01-21 12:35:43.268744459 +0000 UTC m=+7127.729532067" Jan 21 12:35:45 crc kubenswrapper[4745]: E0121 12:35:45.680499 4745 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 
38.129.56.78:33516->38.129.56.78:36213: write tcp 38.129.56.78:33516->38.129.56.78:36213: write: broken pipe Jan 21 12:35:46 crc kubenswrapper[4745]: I0121 12:35:46.753074 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hb6b2/crc-debug-5bn7n"] Jan 21 12:35:46 crc kubenswrapper[4745]: I0121 12:35:46.754900 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" Jan 21 12:35:46 crc kubenswrapper[4745]: I0121 12:35:46.782985 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-562tl\" (UniqueName: \"kubernetes.io/projected/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-kube-api-access-562tl\") pod \"crc-debug-5bn7n\" (UID: \"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee\") " pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" Jan 21 12:35:46 crc kubenswrapper[4745]: I0121 12:35:46.783112 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-host\") pod \"crc-debug-5bn7n\" (UID: \"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee\") " pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" Jan 21 12:35:46 crc kubenswrapper[4745]: I0121 12:35:46.884620 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-host\") pod \"crc-debug-5bn7n\" (UID: \"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee\") " pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" Jan 21 12:35:46 crc kubenswrapper[4745]: I0121 12:35:46.884795 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-562tl\" (UniqueName: \"kubernetes.io/projected/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-kube-api-access-562tl\") pod \"crc-debug-5bn7n\" (UID: \"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee\") " 
pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" Jan 21 12:35:46 crc kubenswrapper[4745]: I0121 12:35:46.884881 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-host\") pod \"crc-debug-5bn7n\" (UID: \"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee\") " pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" Jan 21 12:35:46 crc kubenswrapper[4745]: I0121 12:35:46.903052 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-562tl\" (UniqueName: \"kubernetes.io/projected/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-kube-api-access-562tl\") pod \"crc-debug-5bn7n\" (UID: \"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee\") " pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" Jan 21 12:35:47 crc kubenswrapper[4745]: I0121 12:35:47.107939 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" Jan 21 12:35:47 crc kubenswrapper[4745]: W0121 12:35:47.147006 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5e978e6_ed8c_4ef9_ac6a_7d520636f1ee.slice/crio-6a61e74bd161d139c2cdcd2808c020e3c5582dfcb594061fd8606d9975038db9 WatchSource:0}: Error finding container 6a61e74bd161d139c2cdcd2808c020e3c5582dfcb594061fd8606d9975038db9: Status 404 returned error can't find the container with id 6a61e74bd161d139c2cdcd2808c020e3c5582dfcb594061fd8606d9975038db9 Jan 21 12:35:47 crc kubenswrapper[4745]: I0121 12:35:47.254146 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" event={"ID":"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee","Type":"ContainerStarted","Data":"6a61e74bd161d139c2cdcd2808c020e3c5582dfcb594061fd8606d9975038db9"} Jan 21 12:35:49 crc kubenswrapper[4745]: I0121 12:35:49.802053 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-657bb6888b-llfnx_e0da39e8-c4a0-47b5-8427-fb3b731cb0d4/barbican-api-log/0.log" Jan 21 12:35:49 crc kubenswrapper[4745]: I0121 12:35:49.817035 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-657bb6888b-llfnx_e0da39e8-c4a0-47b5-8427-fb3b731cb0d4/barbican-api/0.log" Jan 21 12:35:50 crc kubenswrapper[4745]: I0121 12:35:50.392733 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-795566bfc4-6vxf4_446eb8df-6f58-43b3-9c04-3741ac0f25a3/barbican-keystone-listener-log/0.log" Jan 21 12:35:50 crc kubenswrapper[4745]: I0121 12:35:50.436582 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-795566bfc4-6vxf4_446eb8df-6f58-43b3-9c04-3741ac0f25a3/barbican-keystone-listener/0.log" Jan 21 12:35:50 crc kubenswrapper[4745]: I0121 12:35:50.456153 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-c9f6db4f9-qq29j_393b4909-d9ac-4852-9ccb-495be4b1b265/barbican-worker-log/0.log" Jan 21 12:35:50 crc kubenswrapper[4745]: I0121 12:35:50.463848 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-c9f6db4f9-qq29j_393b4909-d9ac-4852-9ccb-495be4b1b265/barbican-worker/0.log" Jan 21 12:35:50 crc kubenswrapper[4745]: I0121 12:35:50.532414 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-xnl5f_98ae5b1b-1fcf-4dbd-aeab-e9c831863408/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:35:50 crc kubenswrapper[4745]: I0121 12:35:50.614726 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a3f51f01-ad12-40ab-a599-bca8a2eb5cec/ceilometer-central-agent/0.log" Jan 21 12:35:50 crc kubenswrapper[4745]: I0121 12:35:50.657722 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_a3f51f01-ad12-40ab-a599-bca8a2eb5cec/ceilometer-notification-agent/0.log" Jan 21 12:35:50 crc kubenswrapper[4745]: I0121 12:35:50.664739 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a3f51f01-ad12-40ab-a599-bca8a2eb5cec/sg-core/0.log" Jan 21 12:35:50 crc kubenswrapper[4745]: I0121 12:35:50.689306 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_a3f51f01-ad12-40ab-a599-bca8a2eb5cec/proxy-httpd/0.log" Jan 21 12:35:50 crc kubenswrapper[4745]: I0121 12:35:50.709900 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7a564b0-2da4-4d9c-a8a2-e61604758a1f/cinder-api-log/0.log" Jan 21 12:35:50 crc kubenswrapper[4745]: I0121 12:35:50.876097 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_c7a564b0-2da4-4d9c-a8a2-e61604758a1f/cinder-api/0.log" Jan 21 12:35:51 crc kubenswrapper[4745]: I0121 12:35:51.006909 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6f4e5bfc-8f66-4654-a418-d08193e99884/cinder-scheduler/0.log" Jan 21 12:35:51 crc kubenswrapper[4745]: I0121 12:35:51.141914 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6f4e5bfc-8f66-4654-a418-d08193e99884/probe/0.log" Jan 21 12:35:51 crc kubenswrapper[4745]: I0121 12:35:51.181686 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-tjm9n_7554614d-4696-4385-84d1-9dd2236effef/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:35:51 crc kubenswrapper[4745]: I0121 12:35:51.213272 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-blxw7_07b8c861-e874-4967-871b-5c6ca50791fa/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:35:51 crc kubenswrapper[4745]: I0121 
12:35:51.383092 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-b556b84c5-rkzsq_7eea2f90-9946-4eba-8eb8-f9e00472f0be/dnsmasq-dns/0.log" Jan 21 12:35:51 crc kubenswrapper[4745]: I0121 12:35:51.388516 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-b556b84c5-rkzsq_7eea2f90-9946-4eba-8eb8-f9e00472f0be/init/0.log" Jan 21 12:35:51 crc kubenswrapper[4745]: I0121 12:35:51.415260 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-nwc2n_1b0d7ba0-2c25-43bf-8762-013c96431756/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:35:51 crc kubenswrapper[4745]: I0121 12:35:51.430305 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e539d73b-d00d-45c7-967a-e084d68a78a5/glance-log/0.log" Jan 21 12:35:51 crc kubenswrapper[4745]: I0121 12:35:51.475381 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_e539d73b-d00d-45c7-967a-e084d68a78a5/glance-httpd/0.log" Jan 21 12:35:51 crc kubenswrapper[4745]: I0121 12:35:51.496140 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2d8d9e72-16c4-4372-8d2f-d116c68a4d2a/glance-log/0.log" Jan 21 12:35:51 crc kubenswrapper[4745]: I0121 12:35:51.547618 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_2d8d9e72-16c4-4372-8d2f-d116c68a4d2a/glance-httpd/0.log" Jan 21 12:35:52 crc kubenswrapper[4745]: I0121 12:35:52.409259 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-6d44b77d95-2fvz9_5a433daf-1db3-4263-9a10-28d03dc300b7/heat-api/0.log" Jan 21 12:35:53 crc kubenswrapper[4745]: I0121 12:35:53.271313 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_heat-cfnapi-d56bdb47c-z8b9m_ad490541-95a2-46cd-97ef-7afa19e9e5f9/heat-cfnapi/0.log" Jan 21 12:35:53 crc kubenswrapper[4745]: I0121 12:35:53.317226 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-67ddbd4cb4-nt52k_d2c0cb53-c1c5-41eb-9b1a-0be362c4e80e/heat-engine/0.log" Jan 21 12:35:54 crc kubenswrapper[4745]: I0121 12:35:54.145938 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5cdbfc4d4d-pm6ln_1b30531d-e957-4efd-b09c-d5d0b5fd1382/horizon-log/0.log" Jan 21 12:35:54 crc kubenswrapper[4745]: I0121 12:35:54.297427 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5cdbfc4d4d-pm6ln_1b30531d-e957-4efd-b09c-d5d0b5fd1382/horizon/2.log" Jan 21 12:35:54 crc kubenswrapper[4745]: I0121 12:35:54.297930 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-5cdbfc4d4d-pm6ln_1b30531d-e957-4efd-b09c-d5d0b5fd1382/horizon/3.log" Jan 21 12:35:54 crc kubenswrapper[4745]: I0121 12:35:54.328097 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-njk5j_a051be73-e1d2-4233-8da1-847120a2fe1b/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:35:54 crc kubenswrapper[4745]: I0121 12:35:54.366041 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-tq4jv_73beacee-28b3-46c4-8643-74e53002ef5e/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:35:55 crc kubenswrapper[4745]: I0121 12:35:55.664744 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-59f65b95fd-mfxld_c1486472-15c0-432f-bca8-cf77403394f9/keystone-api/0.log" Jan 21 12:35:55 crc kubenswrapper[4745]: I0121 12:35:55.693318 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29483221-n75hv_c9657c82-86ad-461b-af13-737409270945/keystone-cron/0.log" Jan 
21 12:35:55 crc kubenswrapper[4745]: I0121 12:35:55.709555 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29483281-8vl8f_c5e0d2f0-c75a-43d7-bed6-b120867ccf85/keystone-cron/0.log" Jan 21 12:35:55 crc kubenswrapper[4745]: I0121 12:35:55.725917 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_3d17af5b-6f17-42ef-a3fc-ceec818bb54f/kube-state-metrics/0.log" Jan 21 12:35:55 crc kubenswrapper[4745]: I0121 12:35:55.776315 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-f4gt5_8df35d83-d69d-4747-b617-9ef2be130951/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:36:01 crc kubenswrapper[4745]: I0121 12:36:01.398316 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" event={"ID":"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee","Type":"ContainerStarted","Data":"9e3fe7f1c5532acd8ca1fa364aa035cd759d2e2da89d26cc20d8eb651e044c37"} Jan 21 12:36:01 crc kubenswrapper[4745]: I0121 12:36:01.430984 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" podStartSLOduration=1.958181265 podStartE2EDuration="15.430964491s" podCreationTimestamp="2026-01-21 12:35:46 +0000 UTC" firstStartedPulling="2026-01-21 12:35:47.149606838 +0000 UTC m=+7131.610394436" lastFinishedPulling="2026-01-21 12:36:00.622390064 +0000 UTC m=+7145.083177662" observedRunningTime="2026-01-21 12:36:01.42299007 +0000 UTC m=+7145.883777668" watchObservedRunningTime="2026-01-21 12:36:01.430964491 +0000 UTC m=+7145.891752089" Jan 21 12:36:07 crc kubenswrapper[4745]: I0121 12:36:07.090089 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9253af27-9c32-4977-9632-266bb434fd18/memcached/0.log" Jan 21 12:36:08 crc kubenswrapper[4745]: I0121 12:36:08.524256 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_neutron-6c4c669957-bh9tq_6b270428-536d-4f65-b13a-e52446574239/neutron-api/0.log" Jan 21 12:36:08 crc kubenswrapper[4745]: I0121 12:36:08.552446 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-6c4c669957-bh9tq_6b270428-536d-4f65-b13a-e52446574239/neutron-httpd/0.log" Jan 21 12:36:08 crc kubenswrapper[4745]: I0121 12:36:08.572476 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-48h2x_200916d8-adce-4f77-b2c2-44be9da69f65/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:36:09 crc kubenswrapper[4745]: I0121 12:36:09.169872 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_970d824e-3226-4ab9-a661-b1185dfe5dff/nova-api-log/0.log" Jan 21 12:36:10 crc kubenswrapper[4745]: I0121 12:36:10.231398 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_970d824e-3226-4ab9-a661-b1185dfe5dff/nova-api-api/0.log" Jan 21 12:36:10 crc kubenswrapper[4745]: I0121 12:36:10.436221 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f83e31f6-d723-448d-9b2d-dbb7c7d23447/nova-cell0-conductor-conductor/0.log" Jan 21 12:36:10 crc kubenswrapper[4745]: I0121 12:36:10.529079 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_498ecedf-353c-4497-98b8-202c4ce5dd29/nova-cell1-conductor-conductor/0.log" Jan 21 12:36:10 crc kubenswrapper[4745]: I0121 12:36:10.625335 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_7ce186c4-d95d-4846-bf1e-db0cc6952fac/nova-cell1-novncproxy-novncproxy/0.log" Jan 21 12:36:10 crc kubenswrapper[4745]: I0121 12:36:10.689483 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-gmznd_2fc7129c-3f8a-42cc-baf6-d499c5582e71/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:36:10 crc kubenswrapper[4745]: I0121 12:36:10.754016 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c/nova-metadata-log/0.log" Jan 21 12:36:11 crc kubenswrapper[4745]: I0121 12:36:11.380587 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lgq6w_ad7637e4-fd78-447b-98ea-20af5f3c5c2a/controller/0.log" Jan 21 12:36:11 crc kubenswrapper[4745]: I0121 12:36:11.406921 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lgq6w_ad7637e4-fd78-447b-98ea-20af5f3c5c2a/kube-rbac-proxy/0.log" Jan 21 12:36:11 crc kubenswrapper[4745]: I0121 12:36:11.452385 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/controller/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.241609 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_30f95a5f-17bd-4e0b-be4c-4fe9d5528d4c/nova-metadata-metadata/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.642358 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/frr/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.656053 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/reloader/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.661541 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/frr-metrics/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.671843 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/kube-rbac-proxy/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.693888 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/kube-rbac-proxy-frr/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.707261 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/cp-frr-files/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.707797 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_1c9795b0-c473-4536-94e3-64e5dd44f230/nova-scheduler-scheduler/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.717290 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/cp-reloader/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.730409 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0dd4138e-532c-446d-84ba-6bf954dfbd03/galera/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.736618 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/cp-metrics/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.741004 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_0dd4138e-532c-446d-84ba-6bf954dfbd03/mysql-bootstrap/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.748144 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dq466_5e2a9cf8-053e-4225-b055-45d69ebfaa94/frr-k8s-webhook-server/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.766609 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_c2b5df3e-a44d-42ff-96a4-2bfd32db45bf/galera/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.774123 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65d59f8cf8-8xqnr_cf161197-4160-49ab-a126-edca468534b7/manager/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.778207 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c2b5df3e-a44d-42ff-96a4-2bfd32db45bf/mysql-bootstrap/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.789613 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b7c494555-zdlbt_1be9da42-8db6-47b9-b7ec-788b04db264d/webhook-server/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.793747 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_a6aca1df-e09e-42d8-8046-be985160f75a/openstackclient/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.853281 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-q4h6w_a7d57467-feff-4abf-b152-11fe4647f21d/openstack-network-exporter/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.873196 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xs6fp_50f6a02e-ecd9-48c9-8332-806fda00af43/ovsdb-server/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.943224 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xs6fp_50f6a02e-ecd9-48c9-8332-806fda00af43/ovs-vswitchd/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.962424 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-xs6fp_50f6a02e-ecd9-48c9-8332-806fda00af43/ovsdb-server-init/0.log" Jan 21 12:36:14 crc kubenswrapper[4745]: I0121 12:36:14.999339 4745 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-t8gd4_113ad23b-2a19-4cef-a99b-7b61d3e0779f/ovn-controller/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.049272 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-65fm7_7e88dbcd-044a-4c58-8069-54de2ea049c0/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.071933 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9455e114-6033-43af-960e-65da0f232984/ovn-northd/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.080178 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_9455e114-6033-43af-960e-65da0f232984/openstack-network-exporter/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.109160 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_19556673-788b-4132-97fa-616a25a67fad/ovsdbserver-nb/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.122553 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_19556673-788b-4132-97fa-616a25a67fad/openstack-network-exporter/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.147449 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b78737fd-60ce-47e2-bfa8-92241cd4a475/ovsdbserver-sb/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.156082 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_b78737fd-60ce-47e2-bfa8-92241cd4a475/openstack-network-exporter/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.414917 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-64hm8_88871d5a-093a-41c6-98bf-629e6769ba71/speaker/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.424665 4745 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-64hm8_88871d5a-093a-41c6-98bf-629e6769ba71/kube-rbac-proxy/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.470703 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9b7b6cc58-8rqwl_f7dda9f1-400d-40c9-82a4-87b745d91803/placement-log/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.609950 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-9b7b6cc58-8rqwl_f7dda9f1-400d-40c9-82a4-87b745d91803/placement-api/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.643770 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ce38831c-0940-459f-a137-00ce0acbc5bd/rabbitmq/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.648204 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_ce38831c-0940-459f-a137-00ce0acbc5bd/setup-container/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.677635 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b8027b59-b371-4cd4-b4a1-da4073dc0b61/rabbitmq/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.682858 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_b8027b59-b371-4cd4-b4a1-da4073dc0b61/setup-container/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.701412 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-7bzjp_9ae3b0e8-dabd-4b52-91c9-55d4695f4660/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.712751 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-zbd64_743f1675-ea0d-4d4d-837b-82c6807bb12a/redhat-edpm-deployment-openstack-edpm-ipam/0.log" 
Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.724247 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-8rksv_0ac74398-cfce-4a36-998c-057d617fe478/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.738569 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-q889s_6166dc52-4171-488c-99bb-f522c631efb0/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.755571 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-gpd6q_548fb3fd-319e-4b59-a233-afbb48300c3b/ssh-known-hosts-edpm-deployment/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.974975 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84f7d6cccf-pmbj6_eb6ab3e8-65c0-4076-8633-485e6f678171/proxy-httpd/0.log" Jan 21 12:36:15 crc kubenswrapper[4745]: I0121 12:36:15.994926 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-84f7d6cccf-pmbj6_eb6ab3e8-65c0-4076-8633-485e6f678171/proxy-server/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.010972 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-gqgp7_47f91446-e767-4f28-b77a-e77a7b9cd842/swift-ring-rebalance/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.170078 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/account-server/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.219815 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/account-replicator/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.227000 4745 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/account-auditor/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.236375 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/account-reaper/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.249391 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/container-server/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.302830 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/container-replicator/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.309379 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/container-auditor/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.325255 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/container-updater/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.343959 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/object-server/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.408451 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/object-replicator/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.448008 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/object-auditor/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.458245 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/object-updater/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.472005 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/object-expirer/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.496048 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/rsync/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.513837 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e3c32d66-7e7d-40dc-8726-2084e85452af/swift-recon-cron/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.585062 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-mwzmj_07e7aba1-1062-43a9-8a86-9b6ceba23c72/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.869844 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-multi-thread-testing_7dc068ac-4289-4996-8263-d1db282282cd/tempest-tests-tempest-tests-runner/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.969902 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-thread-testing_58f0330f-8bbd-440b-8396-79f1976798af/tempest-tests-tempest-tests-runner/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.976579 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_3ec6af6b-ebc1-4201-bceb-a8bc1907284a/test-operator-logs-container/0.log" Jan 21 12:36:16 crc kubenswrapper[4745]: I0121 12:36:16.988328 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-652rw_e125e636-26d3-49e3-9e06-2c1a3cd106c9/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 21 12:36:32 crc kubenswrapper[4745]: I0121 12:36:32.474274 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2_e386ddd7-8bcd-4130-b5f8-1ec63b3c515a/extract/0.log" Jan 21 12:36:32 crc kubenswrapper[4745]: I0121 12:36:32.483198 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2_e386ddd7-8bcd-4130-b5f8-1ec63b3c515a/util/0.log" Jan 21 12:36:32 crc kubenswrapper[4745]: I0121 12:36:32.492349 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2_e386ddd7-8bcd-4130-b5f8-1ec63b3c515a/pull/0.log" Jan 21 12:36:32 crc kubenswrapper[4745]: I0121 12:36:32.570449 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-bqhjj_f99a5f65-e2aa-4476-b4c6-6566761f1ad2/manager/0.log" Jan 21 12:36:32 crc kubenswrapper[4745]: I0121 12:36:32.607554 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-qcrlk_d9337025-a702-4dd2-b8a4-e807525a34f5/manager/0.log" Jan 21 12:36:32 crc kubenswrapper[4745]: I0121 12:36:32.657957 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-hw9zg_bc9be084-edd6-4556-88af-354f416d451c/manager/0.log" Jan 21 12:36:32 crc kubenswrapper[4745]: I0121 12:36:32.748319 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gntws_9ff19137-02fd-4de1-9601-95a5c0fbbed0/manager/0.log" Jan 21 12:36:32 crc 
kubenswrapper[4745]: I0121 12:36:32.820163 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-g4gpj_b28edf64-70dc-4fc2-8d7f-c1f141cbd31e/manager/0.log" Jan 21 12:36:32 crc kubenswrapper[4745]: I0121 12:36:32.847876 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sqhft_784904b1-a1d9-4319-be67-34e3dfdc1c9a/manager/0.log" Jan 21 12:36:33 crc kubenswrapper[4745]: I0121 12:36:33.119405 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-4nt9f_2528950f-ec80-4609-a77c-d6fbb2768e3b/manager/0.log" Jan 21 12:36:33 crc kubenswrapper[4745]: I0121 12:36:33.132917 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-clbcs_2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462/manager/0.log" Jan 21 12:36:33 crc kubenswrapper[4745]: I0121 12:36:33.222052 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-fh7ts_fb04ba1c-d6a0-40aa-b985-f4715cb11257/manager/0.log" Jan 21 12:36:33 crc kubenswrapper[4745]: I0121 12:36:33.238051 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-dvhql_dfb1f262-fe24-45bf-8f75-0e2a81989f3f/manager/0.log" Jan 21 12:36:33 crc kubenswrapper[4745]: I0121 12:36:33.282471 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8xm9d_c0985a55-6ede-4214-87fe-27cb5668dd86/manager/0.log" Jan 21 12:36:33 crc kubenswrapper[4745]: I0121 12:36:33.352395 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-x9mpf_42c37f0d-415a-4a72-ae98-07551477c6cc/manager/0.log" Jan 21 
12:36:33 crc kubenswrapper[4745]: I0121 12:36:33.450089 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-g8j7m_be658ac1-07b6-482b-8b99-35a75fcf3b50/manager/0.log" Jan 21 12:36:33 crc kubenswrapper[4745]: I0121 12:36:33.460308 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-bx656_a96f3189-7bbc-404d-ad6d-05b8fefb65fc/manager/0.log" Jan 21 12:36:33 crc kubenswrapper[4745]: I0121 12:36:33.479938 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4_1f562ebe-222a-441b-9277-0aa69a0c0fb3/manager/0.log" Jan 21 12:36:33 crc kubenswrapper[4745]: I0121 12:36:33.609738 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-777994b6d8-xpq4v_8381ff45-ae46-437a-894e-1530d39397f8/operator/0.log" Jan 21 12:36:34 crc kubenswrapper[4745]: I0121 12:36:34.934342 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-78d57d4fdd-dxmll_8ed49bb1-d169-4518-b064-3fb35fd1bad0/manager/0.log" Jan 21 12:36:34 crc kubenswrapper[4745]: I0121 12:36:34.950605 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-l4qmd_fa66bbac-12d5-40aa-b852-00ddac9637a1/registry-server/0.log" Jan 21 12:36:34 crc kubenswrapper[4745]: I0121 12:36:34.997090 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-j96sf_a292ef63-66c6-4416-8212-7b06a9bb8761/manager/0.log" Jan 21 12:36:35 crc kubenswrapper[4745]: I0121 12:36:35.030738 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-8v4t6_ab348be4-f24d-41f5-947a-7f49dc330aa9/manager/0.log" Jan 21 12:36:35 crc kubenswrapper[4745]: I0121 12:36:35.056837 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-s8zz8_1efe6d30-3c28-4945-8615-49cafec58641/operator/0.log" Jan 21 12:36:35 crc kubenswrapper[4745]: I0121 12:36:35.084898 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-46lz5_57b58631-9efc-4cdb-bb89-47aa70a6bd98/manager/0.log" Jan 21 12:36:35 crc kubenswrapper[4745]: I0121 12:36:35.144299 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-dh2t4_dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19/manager/0.log" Jan 21 12:36:35 crc kubenswrapper[4745]: I0121 12:36:35.159288 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-q4ccb_10226f41-eb60-45bf-a116-c51f3de0ea39/manager/0.log" Jan 21 12:36:35 crc kubenswrapper[4745]: I0121 12:36:35.172323 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-bg5mt_94d1ae33-41a7-414c-b0d9-cc843ca9fa47/manager/0.log" Jan 21 12:36:41 crc kubenswrapper[4745]: I0121 12:36:41.586637 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-nfgt5_1eb90eab-f69a-4fef-aef1-b8f4473b91fd/control-plane-machine-set-operator/0.log" Jan 21 12:36:41 crc kubenswrapper[4745]: I0121 12:36:41.601437 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dfzgf_b8f5958e-78cf-428c-b9c0-abae011b2de4/kube-rbac-proxy/0.log" Jan 21 12:36:41 crc kubenswrapper[4745]: I0121 12:36:41.615838 4745 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dfzgf_b8f5958e-78cf-428c-b9c0-abae011b2de4/machine-api-operator/0.log" Jan 21 12:36:45 crc kubenswrapper[4745]: I0121 12:36:45.867005 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:36:45 crc kubenswrapper[4745]: I0121 12:36:45.868061 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:36:50 crc kubenswrapper[4745]: I0121 12:36:50.857245 4745 generic.go:334] "Generic (PLEG): container finished" podID="c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee" containerID="9e3fe7f1c5532acd8ca1fa364aa035cd759d2e2da89d26cc20d8eb651e044c37" exitCode=0 Jan 21 12:36:50 crc kubenswrapper[4745]: I0121 12:36:50.857342 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" event={"ID":"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee","Type":"ContainerDied","Data":"9e3fe7f1c5532acd8ca1fa364aa035cd759d2e2da89d26cc20d8eb651e044c37"} Jan 21 12:36:51 crc kubenswrapper[4745]: I0121 12:36:51.988525 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" Jan 21 12:36:52 crc kubenswrapper[4745]: I0121 12:36:52.031435 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hb6b2/crc-debug-5bn7n"] Jan 21 12:36:52 crc kubenswrapper[4745]: I0121 12:36:52.040651 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hb6b2/crc-debug-5bn7n"] Jan 21 12:36:52 crc kubenswrapper[4745]: I0121 12:36:52.139387 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-562tl\" (UniqueName: \"kubernetes.io/projected/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-kube-api-access-562tl\") pod \"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee\" (UID: \"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee\") " Jan 21 12:36:52 crc kubenswrapper[4745]: I0121 12:36:52.139468 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-host\") pod \"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee\" (UID: \"c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee\") " Jan 21 12:36:52 crc kubenswrapper[4745]: I0121 12:36:52.139667 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-host" (OuterVolumeSpecName: "host") pod "c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee" (UID: "c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 12:36:52 crc kubenswrapper[4745]: I0121 12:36:52.141258 4745 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-host\") on node \"crc\" DevicePath \"\"" Jan 21 12:36:52 crc kubenswrapper[4745]: I0121 12:36:52.146293 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-kube-api-access-562tl" (OuterVolumeSpecName: "kube-api-access-562tl") pod "c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee" (UID: "c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee"). InnerVolumeSpecName "kube-api-access-562tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:36:52 crc kubenswrapper[4745]: I0121 12:36:52.244003 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-562tl\" (UniqueName: \"kubernetes.io/projected/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee-kube-api-access-562tl\") on node \"crc\" DevicePath \"\"" Jan 21 12:36:52 crc kubenswrapper[4745]: I0121 12:36:52.894123 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a61e74bd161d139c2cdcd2808c020e3c5582dfcb594061fd8606d9975038db9" Jan 21 12:36:52 crc kubenswrapper[4745]: I0121 12:36:52.894206 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-5bn7n" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.201002 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hb6b2/crc-debug-scnv6"] Jan 21 12:36:53 crc kubenswrapper[4745]: E0121 12:36:53.201489 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee" containerName="container-00" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.201505 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee" containerName="container-00" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.201788 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee" containerName="container-00" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.202581 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-scnv6" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.364358 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e22accd4-2460-4739-ae2a-6c9f20b03bd1-host\") pod \"crc-debug-scnv6\" (UID: \"e22accd4-2460-4739-ae2a-6c9f20b03bd1\") " pod="openshift-must-gather-hb6b2/crc-debug-scnv6" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.364505 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfkgf\" (UniqueName: \"kubernetes.io/projected/e22accd4-2460-4739-ae2a-6c9f20b03bd1-kube-api-access-jfkgf\") pod \"crc-debug-scnv6\" (UID: \"e22accd4-2460-4739-ae2a-6c9f20b03bd1\") " pod="openshift-must-gather-hb6b2/crc-debug-scnv6" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.466406 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/e22accd4-2460-4739-ae2a-6c9f20b03bd1-host\") pod \"crc-debug-scnv6\" (UID: \"e22accd4-2460-4739-ae2a-6c9f20b03bd1\") " pod="openshift-must-gather-hb6b2/crc-debug-scnv6" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.466575 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e22accd4-2460-4739-ae2a-6c9f20b03bd1-host\") pod \"crc-debug-scnv6\" (UID: \"e22accd4-2460-4739-ae2a-6c9f20b03bd1\") " pod="openshift-must-gather-hb6b2/crc-debug-scnv6" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.466708 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfkgf\" (UniqueName: \"kubernetes.io/projected/e22accd4-2460-4739-ae2a-6c9f20b03bd1-kube-api-access-jfkgf\") pod \"crc-debug-scnv6\" (UID: \"e22accd4-2460-4739-ae2a-6c9f20b03bd1\") " pod="openshift-must-gather-hb6b2/crc-debug-scnv6" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.490294 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfkgf\" (UniqueName: \"kubernetes.io/projected/e22accd4-2460-4739-ae2a-6c9f20b03bd1-kube-api-access-jfkgf\") pod \"crc-debug-scnv6\" (UID: \"e22accd4-2460-4739-ae2a-6c9f20b03bd1\") " pod="openshift-must-gather-hb6b2/crc-debug-scnv6" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.520372 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-scnv6" Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.905004 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/crc-debug-scnv6" event={"ID":"e22accd4-2460-4739-ae2a-6c9f20b03bd1","Type":"ContainerStarted","Data":"9def2a8c90053562b908bcb2b0a6fab74743eee79a1242eb81f35aa040a04536"} Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.905379 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/crc-debug-scnv6" event={"ID":"e22accd4-2460-4739-ae2a-6c9f20b03bd1","Type":"ContainerStarted","Data":"13040930b8d4f32fbd830a85f3da6a196b9bae84fc14ca52ea4e63dc3a49dc1d"} Jan 21 12:36:53 crc kubenswrapper[4745]: I0121 12:36:53.928513 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-hb6b2/crc-debug-scnv6" podStartSLOduration=0.928490808 podStartE2EDuration="928.490808ms" podCreationTimestamp="2026-01-21 12:36:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 12:36:53.921083022 +0000 UTC m=+7198.381870630" watchObservedRunningTime="2026-01-21 12:36:53.928490808 +0000 UTC m=+7198.389278406" Jan 21 12:36:54 crc kubenswrapper[4745]: I0121 12:36:54.019133 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee" path="/var/lib/kubelet/pods/c5e978e6-ed8c-4ef9-ac6a-7d520636f1ee/volumes" Jan 21 12:36:54 crc kubenswrapper[4745]: I0121 12:36:54.913000 4745 generic.go:334] "Generic (PLEG): container finished" podID="e22accd4-2460-4739-ae2a-6c9f20b03bd1" containerID="9def2a8c90053562b908bcb2b0a6fab74743eee79a1242eb81f35aa040a04536" exitCode=0 Jan 21 12:36:54 crc kubenswrapper[4745]: I0121 12:36:54.913199 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/crc-debug-scnv6" 
event={"ID":"e22accd4-2460-4739-ae2a-6c9f20b03bd1","Type":"ContainerDied","Data":"9def2a8c90053562b908bcb2b0a6fab74743eee79a1242eb81f35aa040a04536"} Jan 21 12:36:56 crc kubenswrapper[4745]: I0121 12:36:56.028803 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-scnv6" Jan 21 12:36:56 crc kubenswrapper[4745]: I0121 12:36:56.108176 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e22accd4-2460-4739-ae2a-6c9f20b03bd1-host\") pod \"e22accd4-2460-4739-ae2a-6c9f20b03bd1\" (UID: \"e22accd4-2460-4739-ae2a-6c9f20b03bd1\") " Jan 21 12:36:56 crc kubenswrapper[4745]: I0121 12:36:56.108322 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfkgf\" (UniqueName: \"kubernetes.io/projected/e22accd4-2460-4739-ae2a-6c9f20b03bd1-kube-api-access-jfkgf\") pod \"e22accd4-2460-4739-ae2a-6c9f20b03bd1\" (UID: \"e22accd4-2460-4739-ae2a-6c9f20b03bd1\") " Jan 21 12:36:56 crc kubenswrapper[4745]: I0121 12:36:56.109447 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e22accd4-2460-4739-ae2a-6c9f20b03bd1-host" (OuterVolumeSpecName: "host") pod "e22accd4-2460-4739-ae2a-6c9f20b03bd1" (UID: "e22accd4-2460-4739-ae2a-6c9f20b03bd1"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 12:36:56 crc kubenswrapper[4745]: I0121 12:36:56.116425 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e22accd4-2460-4739-ae2a-6c9f20b03bd1-kube-api-access-jfkgf" (OuterVolumeSpecName: "kube-api-access-jfkgf") pod "e22accd4-2460-4739-ae2a-6c9f20b03bd1" (UID: "e22accd4-2460-4739-ae2a-6c9f20b03bd1"). InnerVolumeSpecName "kube-api-access-jfkgf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:36:56 crc kubenswrapper[4745]: I0121 12:36:56.210318 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfkgf\" (UniqueName: \"kubernetes.io/projected/e22accd4-2460-4739-ae2a-6c9f20b03bd1-kube-api-access-jfkgf\") on node \"crc\" DevicePath \"\"" Jan 21 12:36:56 crc kubenswrapper[4745]: I0121 12:36:56.210981 4745 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e22accd4-2460-4739-ae2a-6c9f20b03bd1-host\") on node \"crc\" DevicePath \"\"" Jan 21 12:36:56 crc kubenswrapper[4745]: I0121 12:36:56.580084 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hb6b2/crc-debug-scnv6"] Jan 21 12:36:56 crc kubenswrapper[4745]: I0121 12:36:56.590702 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hb6b2/crc-debug-scnv6"] Jan 21 12:36:56 crc kubenswrapper[4745]: I0121 12:36:56.937776 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13040930b8d4f32fbd830a85f3da6a196b9bae84fc14ca52ea4e63dc3a49dc1d" Jan 21 12:36:56 crc kubenswrapper[4745]: I0121 12:36:56.937886 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-scnv6" Jan 21 12:36:57 crc kubenswrapper[4745]: I0121 12:36:57.748140 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-hb6b2/crc-debug-978xd"] Jan 21 12:36:57 crc kubenswrapper[4745]: E0121 12:36:57.749905 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e22accd4-2460-4739-ae2a-6c9f20b03bd1" containerName="container-00" Jan 21 12:36:57 crc kubenswrapper[4745]: I0121 12:36:57.750007 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="e22accd4-2460-4739-ae2a-6c9f20b03bd1" containerName="container-00" Jan 21 12:36:57 crc kubenswrapper[4745]: I0121 12:36:57.750312 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="e22accd4-2460-4739-ae2a-6c9f20b03bd1" containerName="container-00" Jan 21 12:36:57 crc kubenswrapper[4745]: I0121 12:36:57.751132 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-978xd" Jan 21 12:36:57 crc kubenswrapper[4745]: I0121 12:36:57.839613 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5w9v\" (UniqueName: \"kubernetes.io/projected/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-kube-api-access-p5w9v\") pod \"crc-debug-978xd\" (UID: \"d9c5a61a-32a2-479c-aaf5-ba20adaa689e\") " pod="openshift-must-gather-hb6b2/crc-debug-978xd" Jan 21 12:36:57 crc kubenswrapper[4745]: I0121 12:36:57.839747 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-host\") pod \"crc-debug-978xd\" (UID: \"d9c5a61a-32a2-479c-aaf5-ba20adaa689e\") " pod="openshift-must-gather-hb6b2/crc-debug-978xd" Jan 21 12:36:57 crc kubenswrapper[4745]: I0121 12:36:57.941270 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5w9v\" (UniqueName: 
\"kubernetes.io/projected/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-kube-api-access-p5w9v\") pod \"crc-debug-978xd\" (UID: \"d9c5a61a-32a2-479c-aaf5-ba20adaa689e\") " pod="openshift-must-gather-hb6b2/crc-debug-978xd" Jan 21 12:36:57 crc kubenswrapper[4745]: I0121 12:36:57.941389 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-host\") pod \"crc-debug-978xd\" (UID: \"d9c5a61a-32a2-479c-aaf5-ba20adaa689e\") " pod="openshift-must-gather-hb6b2/crc-debug-978xd" Jan 21 12:36:57 crc kubenswrapper[4745]: I0121 12:36:57.941663 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-host\") pod \"crc-debug-978xd\" (UID: \"d9c5a61a-32a2-479c-aaf5-ba20adaa689e\") " pod="openshift-must-gather-hb6b2/crc-debug-978xd" Jan 21 12:36:57 crc kubenswrapper[4745]: I0121 12:36:57.961816 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5w9v\" (UniqueName: \"kubernetes.io/projected/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-kube-api-access-p5w9v\") pod \"crc-debug-978xd\" (UID: \"d9c5a61a-32a2-479c-aaf5-ba20adaa689e\") " pod="openshift-must-gather-hb6b2/crc-debug-978xd" Jan 21 12:36:58 crc kubenswrapper[4745]: I0121 12:36:58.011765 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e22accd4-2460-4739-ae2a-6c9f20b03bd1" path="/var/lib/kubelet/pods/e22accd4-2460-4739-ae2a-6c9f20b03bd1/volumes" Jan 21 12:36:58 crc kubenswrapper[4745]: I0121 12:36:58.071345 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-978xd" Jan 21 12:36:58 crc kubenswrapper[4745]: W0121 12:36:58.101305 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd9c5a61a_32a2_479c_aaf5_ba20adaa689e.slice/crio-5a7063772a763e7f215aa7aee2932aa83c3eb9b3c4906631c5ca1fdf1e13d7e6 WatchSource:0}: Error finding container 5a7063772a763e7f215aa7aee2932aa83c3eb9b3c4906631c5ca1fdf1e13d7e6: Status 404 returned error can't find the container with id 5a7063772a763e7f215aa7aee2932aa83c3eb9b3c4906631c5ca1fdf1e13d7e6 Jan 21 12:36:58 crc kubenswrapper[4745]: I0121 12:36:58.955335 4745 generic.go:334] "Generic (PLEG): container finished" podID="d9c5a61a-32a2-479c-aaf5-ba20adaa689e" containerID="299b066be7512b0ee45ac4bc4f67d066660c97a727f302c3eed750b7ade68a25" exitCode=0 Jan 21 12:36:58 crc kubenswrapper[4745]: I0121 12:36:58.955395 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/crc-debug-978xd" event={"ID":"d9c5a61a-32a2-479c-aaf5-ba20adaa689e","Type":"ContainerDied","Data":"299b066be7512b0ee45ac4bc4f67d066660c97a727f302c3eed750b7ade68a25"} Jan 21 12:36:58 crc kubenswrapper[4745]: I0121 12:36:58.955430 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/crc-debug-978xd" event={"ID":"d9c5a61a-32a2-479c-aaf5-ba20adaa689e","Type":"ContainerStarted","Data":"5a7063772a763e7f215aa7aee2932aa83c3eb9b3c4906631c5ca1fdf1e13d7e6"} Jan 21 12:36:59 crc kubenswrapper[4745]: I0121 12:36:59.005174 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hb6b2/crc-debug-978xd"] Jan 21 12:36:59 crc kubenswrapper[4745]: I0121 12:36:59.016824 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hb6b2/crc-debug-978xd"] Jan 21 12:36:59 crc kubenswrapper[4745]: I0121 12:36:59.615014 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-858654f9db-s5t4j_60b550eb-7b13-4042-99c2-70f21e9ec81f/cert-manager-controller/0.log" Jan 21 12:36:59 crc kubenswrapper[4745]: I0121 12:36:59.642269 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rgtt2_6f55bdba-45e5-485d-ae8f-a8576885b3ff/cert-manager-cainjector/0.log" Jan 21 12:36:59 crc kubenswrapper[4745]: I0121 12:36:59.653161 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-7xg5s_28ac8429-55e4-4387-99d2-f20e654f0dde/cert-manager-webhook/0.log" Jan 21 12:37:00 crc kubenswrapper[4745]: I0121 12:37:00.066883 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-978xd" Jan 21 12:37:00 crc kubenswrapper[4745]: I0121 12:37:00.186236 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-host\") pod \"d9c5a61a-32a2-479c-aaf5-ba20adaa689e\" (UID: \"d9c5a61a-32a2-479c-aaf5-ba20adaa689e\") " Jan 21 12:37:00 crc kubenswrapper[4745]: I0121 12:37:00.186366 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5w9v\" (UniqueName: \"kubernetes.io/projected/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-kube-api-access-p5w9v\") pod \"d9c5a61a-32a2-479c-aaf5-ba20adaa689e\" (UID: \"d9c5a61a-32a2-479c-aaf5-ba20adaa689e\") " Jan 21 12:37:00 crc kubenswrapper[4745]: I0121 12:37:00.186362 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-host" (OuterVolumeSpecName: "host") pod "d9c5a61a-32a2-479c-aaf5-ba20adaa689e" (UID: "d9c5a61a-32a2-479c-aaf5-ba20adaa689e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 21 12:37:00 crc kubenswrapper[4745]: I0121 12:37:00.186892 4745 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-host\") on node \"crc\" DevicePath \"\"" Jan 21 12:37:00 crc kubenswrapper[4745]: I0121 12:37:00.191517 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-kube-api-access-p5w9v" (OuterVolumeSpecName: "kube-api-access-p5w9v") pod "d9c5a61a-32a2-479c-aaf5-ba20adaa689e" (UID: "d9c5a61a-32a2-479c-aaf5-ba20adaa689e"). InnerVolumeSpecName "kube-api-access-p5w9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:37:00 crc kubenswrapper[4745]: I0121 12:37:00.288515 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5w9v\" (UniqueName: \"kubernetes.io/projected/d9c5a61a-32a2-479c-aaf5-ba20adaa689e-kube-api-access-p5w9v\") on node \"crc\" DevicePath \"\"" Jan 21 12:37:00 crc kubenswrapper[4745]: I0121 12:37:00.973486 4745 scope.go:117] "RemoveContainer" containerID="299b066be7512b0ee45ac4bc4f67d066660c97a727f302c3eed750b7ade68a25" Jan 21 12:37:00 crc kubenswrapper[4745]: I0121 12:37:00.973548 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb6b2/crc-debug-978xd" Jan 21 12:37:02 crc kubenswrapper[4745]: I0121 12:37:02.024100 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c5a61a-32a2-479c-aaf5-ba20adaa689e" path="/var/lib/kubelet/pods/d9c5a61a-32a2-479c-aaf5-ba20adaa689e/volumes" Jan 21 12:37:05 crc kubenswrapper[4745]: I0121 12:37:05.531768 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-54v72_5f632930-37d6-4083-80d2-e56d394f5289/nmstate-console-plugin/0.log" Jan 21 12:37:05 crc kubenswrapper[4745]: I0121 12:37:05.557703 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bpmz2_976354ad-a346-409e-893a-d8edb62a6148/nmstate-handler/0.log" Jan 21 12:37:05 crc kubenswrapper[4745]: I0121 12:37:05.570089 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9t5nq_02756c63-b6cc-42ef-ba04-fbd6127ccfa7/nmstate-metrics/0.log" Jan 21 12:37:05 crc kubenswrapper[4745]: I0121 12:37:05.581801 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9t5nq_02756c63-b6cc-42ef-ba04-fbd6127ccfa7/kube-rbac-proxy/0.log" Jan 21 12:37:05 crc kubenswrapper[4745]: I0121 12:37:05.598052 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-27x5h_26a2f875-6a73-4039-b234-7f628c77bdda/nmstate-operator/0.log" Jan 21 12:37:05 crc kubenswrapper[4745]: I0121 12:37:05.612920 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-k4fch_89a613eb-ec6f-48dc-97d8-38e59281d04e/nmstate-webhook/0.log" Jan 21 12:37:15 crc kubenswrapper[4745]: I0121 12:37:15.866360 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:37:15 crc kubenswrapper[4745]: I0121 12:37:15.867065 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:37:17 crc kubenswrapper[4745]: I0121 12:37:17.222220 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lgq6w_ad7637e4-fd78-447b-98ea-20af5f3c5c2a/controller/0.log" Jan 21 12:37:17 crc kubenswrapper[4745]: I0121 12:37:17.234244 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lgq6w_ad7637e4-fd78-447b-98ea-20af5f3c5c2a/kube-rbac-proxy/0.log" Jan 21 12:37:17 crc kubenswrapper[4745]: I0121 12:37:17.257136 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/controller/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.194416 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/frr/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.206758 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/reloader/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.212007 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/frr-metrics/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.222290 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/kube-rbac-proxy/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.230146 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/kube-rbac-proxy-frr/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.237971 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/cp-frr-files/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.246712 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/cp-reloader/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.253867 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/cp-metrics/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.267126 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dq466_5e2a9cf8-053e-4225-b055-45d69ebfaa94/frr-k8s-webhook-server/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.295191 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65d59f8cf8-8xqnr_cf161197-4160-49ab-a126-edca468534b7/manager/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.308167 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b7c494555-zdlbt_1be9da42-8db6-47b9-b7ec-788b04db264d/webhook-server/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.756258 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-64hm8_88871d5a-093a-41c6-98bf-629e6769ba71/speaker/0.log" Jan 21 12:37:19 crc kubenswrapper[4745]: I0121 12:37:19.768598 4745 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_speaker-64hm8_88871d5a-093a-41c6-98bf-629e6769ba71/kube-rbac-proxy/0.log" Jan 21 12:37:23 crc kubenswrapper[4745]: I0121 12:37:23.673359 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st_52ff2424-850f-47fd-a0c4-fc91fca87048/extract/0.log" Jan 21 12:37:23 crc kubenswrapper[4745]: I0121 12:37:23.682021 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st_52ff2424-850f-47fd-a0c4-fc91fca87048/util/0.log" Jan 21 12:37:23 crc kubenswrapper[4745]: I0121 12:37:23.705804 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dckc6st_52ff2424-850f-47fd-a0c4-fc91fca87048/pull/0.log" Jan 21 12:37:23 crc kubenswrapper[4745]: I0121 12:37:23.717005 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk_ad178c32-02aa-40e7-aa77-35e3c5b9bd0e/extract/0.log" Jan 21 12:37:23 crc kubenswrapper[4745]: I0121 12:37:23.732396 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk_ad178c32-02aa-40e7-aa77-35e3c5b9bd0e/util/0.log" Jan 21 12:37:23 crc kubenswrapper[4745]: I0121 12:37:23.738795 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713c49qk_ad178c32-02aa-40e7-aa77-35e3c5b9bd0e/pull/0.log" Jan 21 12:37:24 crc kubenswrapper[4745]: I0121 12:37:24.716344 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-57bm9_ec4cd655-4062-4058-9de3-81d9ebb11d1b/registry-server/0.log" Jan 21 12:37:24 crc kubenswrapper[4745]: I0121 
12:37:24.726013 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-57bm9_ec4cd655-4062-4058-9de3-81d9ebb11d1b/extract-utilities/0.log" Jan 21 12:37:24 crc kubenswrapper[4745]: I0121 12:37:24.735891 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-57bm9_ec4cd655-4062-4058-9de3-81d9ebb11d1b/extract-content/0.log" Jan 21 12:37:25 crc kubenswrapper[4745]: I0121 12:37:25.700771 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6nq8p_b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1/registry-server/0.log" Jan 21 12:37:25 crc kubenswrapper[4745]: I0121 12:37:25.706632 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6nq8p_b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1/extract-utilities/0.log" Jan 21 12:37:25 crc kubenswrapper[4745]: I0121 12:37:25.714213 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-6nq8p_b7eef9c4-acd6-448d-86e3-4cad0f6b5cd1/extract-content/0.log" Jan 21 12:37:25 crc kubenswrapper[4745]: I0121 12:37:25.739585 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-gkrg9_b3ae4633-cf73-4280-8cac-28ff7399bede/marketplace-operator/0.log" Jan 21 12:37:26 crc kubenswrapper[4745]: I0121 12:37:26.041083 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8f8b_ca113352-0f64-44d4-93d8-250df55bef46/registry-server/0.log" Jan 21 12:37:26 crc kubenswrapper[4745]: I0121 12:37:26.047701 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8f8b_ca113352-0f64-44d4-93d8-250df55bef46/extract-utilities/0.log" Jan 21 12:37:26 crc kubenswrapper[4745]: I0121 12:37:26.055093 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-w8f8b_ca113352-0f64-44d4-93d8-250df55bef46/extract-content/0.log" Jan 21 12:37:26 crc kubenswrapper[4745]: I0121 12:37:26.974254 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2q52q_f7d2344c-d406-471d-aafd-7b04d5ed29cf/registry-server/0.log" Jan 21 12:37:26 crc kubenswrapper[4745]: I0121 12:37:26.980279 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2q52q_f7d2344c-d406-471d-aafd-7b04d5ed29cf/extract-utilities/0.log" Jan 21 12:37:26 crc kubenswrapper[4745]: I0121 12:37:26.987255 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-2q52q_f7d2344c-d406-471d-aafd-7b04d5ed29cf/extract-content/0.log" Jan 21 12:37:36 crc kubenswrapper[4745]: I0121 12:37:36.803193 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9nfj6"] Jan 21 12:37:36 crc kubenswrapper[4745]: E0121 12:37:36.804403 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c5a61a-32a2-479c-aaf5-ba20adaa689e" containerName="container-00" Jan 21 12:37:36 crc kubenswrapper[4745]: I0121 12:37:36.804419 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c5a61a-32a2-479c-aaf5-ba20adaa689e" containerName="container-00" Jan 21 12:37:36 crc kubenswrapper[4745]: I0121 12:37:36.804658 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c5a61a-32a2-479c-aaf5-ba20adaa689e" containerName="container-00" Jan 21 12:37:36 crc kubenswrapper[4745]: I0121 12:37:36.812292 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:36 crc kubenswrapper[4745]: I0121 12:37:36.858152 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nfj6"] Jan 21 12:37:36 crc kubenswrapper[4745]: I0121 12:37:36.928329 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-catalog-content\") pod \"certified-operators-9nfj6\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:36 crc kubenswrapper[4745]: I0121 12:37:36.928677 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-utilities\") pod \"certified-operators-9nfj6\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:36 crc kubenswrapper[4745]: I0121 12:37:36.928850 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfp8t\" (UniqueName: \"kubernetes.io/projected/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-kube-api-access-jfp8t\") pod \"certified-operators-9nfj6\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:37 crc kubenswrapper[4745]: I0121 12:37:37.030502 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-catalog-content\") pod \"certified-operators-9nfj6\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:37 crc kubenswrapper[4745]: I0121 12:37:37.030591 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-utilities\") pod \"certified-operators-9nfj6\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:37 crc kubenswrapper[4745]: I0121 12:37:37.030662 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfp8t\" (UniqueName: \"kubernetes.io/projected/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-kube-api-access-jfp8t\") pod \"certified-operators-9nfj6\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:37 crc kubenswrapper[4745]: I0121 12:37:37.031018 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-catalog-content\") pod \"certified-operators-9nfj6\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:37 crc kubenswrapper[4745]: I0121 12:37:37.032086 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-utilities\") pod \"certified-operators-9nfj6\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:37 crc kubenswrapper[4745]: I0121 12:37:37.067404 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfp8t\" (UniqueName: \"kubernetes.io/projected/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-kube-api-access-jfp8t\") pod \"certified-operators-9nfj6\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:37 crc kubenswrapper[4745]: I0121 12:37:37.178487 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:38 crc kubenswrapper[4745]: I0121 12:37:37.855885 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9nfj6"] Jan 21 12:37:38 crc kubenswrapper[4745]: I0121 12:37:38.335509 4745 generic.go:334] "Generic (PLEG): container finished" podID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" containerID="41e9130a1b5b73e853116f145ae5c32167f8d14328611ce8435700c9ce377121" exitCode=0 Jan 21 12:37:38 crc kubenswrapper[4745]: I0121 12:37:38.336420 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nfj6" event={"ID":"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8","Type":"ContainerDied","Data":"41e9130a1b5b73e853116f145ae5c32167f8d14328611ce8435700c9ce377121"} Jan 21 12:37:38 crc kubenswrapper[4745]: I0121 12:37:38.336594 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nfj6" event={"ID":"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8","Type":"ContainerStarted","Data":"c965c1aa660393cb37d3867afa6d00fa5fc7f6541d5d7d4b5b7a1233e8dd05a7"} Jan 21 12:37:39 crc kubenswrapper[4745]: I0121 12:37:39.347509 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nfj6" event={"ID":"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8","Type":"ContainerStarted","Data":"898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4"} Jan 21 12:37:39 crc kubenswrapper[4745]: I0121 12:37:39.785885 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tgbfw"] Jan 21 12:37:39 crc kubenswrapper[4745]: I0121 12:37:39.798962 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tgbfw"] Jan 21 12:37:39 crc kubenswrapper[4745]: I0121 12:37:39.799082 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:39 crc kubenswrapper[4745]: I0121 12:37:39.958782 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll9fz\" (UniqueName: \"kubernetes.io/projected/328fd1db-9178-49e6-ab32-15989e163353-kube-api-access-ll9fz\") pod \"redhat-operators-tgbfw\" (UID: \"328fd1db-9178-49e6-ab32-15989e163353\") " pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:39 crc kubenswrapper[4745]: I0121 12:37:39.958835 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-catalog-content\") pod \"redhat-operators-tgbfw\" (UID: \"328fd1db-9178-49e6-ab32-15989e163353\") " pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:39 crc kubenswrapper[4745]: I0121 12:37:39.958892 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-utilities\") pod \"redhat-operators-tgbfw\" (UID: \"328fd1db-9178-49e6-ab32-15989e163353\") " pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:40 crc kubenswrapper[4745]: I0121 12:37:40.061152 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-catalog-content\") pod \"redhat-operators-tgbfw\" (UID: \"328fd1db-9178-49e6-ab32-15989e163353\") " pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:40 crc kubenswrapper[4745]: I0121 12:37:40.061304 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-utilities\") pod \"redhat-operators-tgbfw\" (UID: 
\"328fd1db-9178-49e6-ab32-15989e163353\") " pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:40 crc kubenswrapper[4745]: I0121 12:37:40.061510 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll9fz\" (UniqueName: \"kubernetes.io/projected/328fd1db-9178-49e6-ab32-15989e163353-kube-api-access-ll9fz\") pod \"redhat-operators-tgbfw\" (UID: \"328fd1db-9178-49e6-ab32-15989e163353\") " pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:40 crc kubenswrapper[4745]: I0121 12:37:40.061788 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-catalog-content\") pod \"redhat-operators-tgbfw\" (UID: \"328fd1db-9178-49e6-ab32-15989e163353\") " pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:40 crc kubenswrapper[4745]: I0121 12:37:40.062058 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-utilities\") pod \"redhat-operators-tgbfw\" (UID: \"328fd1db-9178-49e6-ab32-15989e163353\") " pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:40 crc kubenswrapper[4745]: I0121 12:37:40.085236 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll9fz\" (UniqueName: \"kubernetes.io/projected/328fd1db-9178-49e6-ab32-15989e163353-kube-api-access-ll9fz\") pod \"redhat-operators-tgbfw\" (UID: \"328fd1db-9178-49e6-ab32-15989e163353\") " pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:40 crc kubenswrapper[4745]: I0121 12:37:40.123454 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:40 crc kubenswrapper[4745]: I0121 12:37:40.372349 4745 generic.go:334] "Generic (PLEG): container finished" podID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" containerID="898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4" exitCode=0 Jan 21 12:37:40 crc kubenswrapper[4745]: I0121 12:37:40.372591 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nfj6" event={"ID":"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8","Type":"ContainerDied","Data":"898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4"} Jan 21 12:37:40 crc kubenswrapper[4745]: I0121 12:37:40.654049 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tgbfw"] Jan 21 12:37:41 crc kubenswrapper[4745]: I0121 12:37:41.382910 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nfj6" event={"ID":"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8","Type":"ContainerStarted","Data":"30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672"} Jan 21 12:37:41 crc kubenswrapper[4745]: I0121 12:37:41.384776 4745 generic.go:334] "Generic (PLEG): container finished" podID="328fd1db-9178-49e6-ab32-15989e163353" containerID="3e6b59feaff6bf1c9649ecfd4aab53bbc4200fc1dee0b207f8a6778d67096f54" exitCode=0 Jan 21 12:37:41 crc kubenswrapper[4745]: I0121 12:37:41.384825 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgbfw" event={"ID":"328fd1db-9178-49e6-ab32-15989e163353","Type":"ContainerDied","Data":"3e6b59feaff6bf1c9649ecfd4aab53bbc4200fc1dee0b207f8a6778d67096f54"} Jan 21 12:37:41 crc kubenswrapper[4745]: I0121 12:37:41.384878 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgbfw" 
event={"ID":"328fd1db-9178-49e6-ab32-15989e163353","Type":"ContainerStarted","Data":"7603354feef6a91871b4f1917f2a53ff45526eea871cb55ac5edd00469a98fcb"} Jan 21 12:37:41 crc kubenswrapper[4745]: I0121 12:37:41.426991 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9nfj6" podStartSLOduration=2.8813111879999997 podStartE2EDuration="5.426969358s" podCreationTimestamp="2026-01-21 12:37:36 +0000 UTC" firstStartedPulling="2026-01-21 12:37:38.342580895 +0000 UTC m=+7242.803368493" lastFinishedPulling="2026-01-21 12:37:40.888239065 +0000 UTC m=+7245.349026663" observedRunningTime="2026-01-21 12:37:41.408672601 +0000 UTC m=+7245.869460189" watchObservedRunningTime="2026-01-21 12:37:41.426969358 +0000 UTC m=+7245.887756956" Jan 21 12:37:43 crc kubenswrapper[4745]: I0121 12:37:43.410182 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgbfw" event={"ID":"328fd1db-9178-49e6-ab32-15989e163353","Type":"ContainerStarted","Data":"dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5"} Jan 21 12:37:45 crc kubenswrapper[4745]: I0121 12:37:45.866828 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:37:45 crc kubenswrapper[4745]: I0121 12:37:45.867188 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:37:45 crc kubenswrapper[4745]: I0121 12:37:45.867256 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 12:37:45 crc kubenswrapper[4745]: I0121 12:37:45.948285 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85539d27e9372360a7e1ae69ec8f1ac0bf3b97a0b8949368acf6d172b6f2ebe7"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:37:45 crc kubenswrapper[4745]: I0121 12:37:45.948482 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://85539d27e9372360a7e1ae69ec8f1ac0bf3b97a0b8949368acf6d172b6f2ebe7" gracePeriod=600 Jan 21 12:37:47 crc kubenswrapper[4745]: I0121 12:37:47.178806 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:47 crc kubenswrapper[4745]: I0121 12:37:47.179195 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:47 crc kubenswrapper[4745]: I0121 12:37:47.450620 4745 generic.go:334] "Generic (PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="85539d27e9372360a7e1ae69ec8f1ac0bf3b97a0b8949368acf6d172b6f2ebe7" exitCode=0 Jan 21 12:37:47 crc kubenswrapper[4745]: I0121 12:37:47.450727 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"85539d27e9372360a7e1ae69ec8f1ac0bf3b97a0b8949368acf6d172b6f2ebe7"} Jan 21 12:37:47 crc kubenswrapper[4745]: I0121 12:37:47.450769 4745 scope.go:117] "RemoveContainer" 
containerID="e22bd7254605ff8276ad63aad03cfa65e544cebd658a9de8d64f4919104b6b04" Jan 21 12:37:47 crc kubenswrapper[4745]: I0121 12:37:47.453215 4745 generic.go:334] "Generic (PLEG): container finished" podID="328fd1db-9178-49e6-ab32-15989e163353" containerID="dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5" exitCode=0 Jan 21 12:37:47 crc kubenswrapper[4745]: I0121 12:37:47.453260 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgbfw" event={"ID":"328fd1db-9178-49e6-ab32-15989e163353","Type":"ContainerDied","Data":"dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5"} Jan 21 12:37:48 crc kubenswrapper[4745]: I0121 12:37:48.228401 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-9nfj6" podUID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" containerName="registry-server" probeResult="failure" output=< Jan 21 12:37:48 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:37:48 crc kubenswrapper[4745]: > Jan 21 12:37:48 crc kubenswrapper[4745]: I0121 12:37:48.464137 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8"} Jan 21 12:37:48 crc kubenswrapper[4745]: I0121 12:37:48.466890 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgbfw" event={"ID":"328fd1db-9178-49e6-ab32-15989e163353","Type":"ContainerStarted","Data":"2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9"} Jan 21 12:37:48 crc kubenswrapper[4745]: I0121 12:37:48.510002 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tgbfw" podStartSLOduration=3.062095589 podStartE2EDuration="9.509979503s" 
podCreationTimestamp="2026-01-21 12:37:39 +0000 UTC" firstStartedPulling="2026-01-21 12:37:41.386567307 +0000 UTC m=+7245.847354905" lastFinishedPulling="2026-01-21 12:37:47.834451221 +0000 UTC m=+7252.295238819" observedRunningTime="2026-01-21 12:37:48.501983191 +0000 UTC m=+7252.962770789" watchObservedRunningTime="2026-01-21 12:37:48.509979503 +0000 UTC m=+7252.970767101" Jan 21 12:37:50 crc kubenswrapper[4745]: I0121 12:37:50.124469 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:50 crc kubenswrapper[4745]: I0121 12:37:50.124778 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:37:51 crc kubenswrapper[4745]: I0121 12:37:51.189143 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tgbfw" podUID="328fd1db-9178-49e6-ab32-15989e163353" containerName="registry-server" probeResult="failure" output=< Jan 21 12:37:51 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:37:51 crc kubenswrapper[4745]: > Jan 21 12:37:57 crc kubenswrapper[4745]: I0121 12:37:57.242194 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:57 crc kubenswrapper[4745]: I0121 12:37:57.307214 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:57 crc kubenswrapper[4745]: I0121 12:37:57.495393 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nfj6"] Jan 21 12:37:58 crc kubenswrapper[4745]: I0121 12:37:58.572366 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9nfj6" podUID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" containerName="registry-server" 
containerID="cri-o://30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672" gracePeriod=2 Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.132731 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.281915 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-utilities\") pod \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.281964 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfp8t\" (UniqueName: \"kubernetes.io/projected/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-kube-api-access-jfp8t\") pod \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.282118 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-catalog-content\") pod \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\" (UID: \"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8\") " Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.284502 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-utilities" (OuterVolumeSpecName: "utilities") pod "3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" (UID: "3bd7c55a-3334-485e-adb0-0c09d4d1b3e8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.321899 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-kube-api-access-jfp8t" (OuterVolumeSpecName: "kube-api-access-jfp8t") pod "3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" (UID: "3bd7c55a-3334-485e-adb0-0c09d4d1b3e8"). InnerVolumeSpecName "kube-api-access-jfp8t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.371428 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" (UID: "3bd7c55a-3334-485e-adb0-0c09d4d1b3e8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.384313 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.384364 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfp8t\" (UniqueName: \"kubernetes.io/projected/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-kube-api-access-jfp8t\") on node \"crc\" DevicePath \"\"" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.384379 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.583443 4745 generic.go:334] "Generic (PLEG): container finished" podID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" 
containerID="30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672" exitCode=0 Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.583492 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nfj6" event={"ID":"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8","Type":"ContainerDied","Data":"30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672"} Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.583574 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9nfj6" event={"ID":"3bd7c55a-3334-485e-adb0-0c09d4d1b3e8","Type":"ContainerDied","Data":"c965c1aa660393cb37d3867afa6d00fa5fc7f6541d5d7d4b5b7a1233e8dd05a7"} Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.583603 4745 scope.go:117] "RemoveContainer" containerID="30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.583616 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9nfj6" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.613904 4745 scope.go:117] "RemoveContainer" containerID="898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.628524 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9nfj6"] Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.640184 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9nfj6"] Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.645617 4745 scope.go:117] "RemoveContainer" containerID="41e9130a1b5b73e853116f145ae5c32167f8d14328611ce8435700c9ce377121" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.723995 4745 scope.go:117] "RemoveContainer" containerID="30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672" Jan 21 12:37:59 crc kubenswrapper[4745]: E0121 12:37:59.725247 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672\": container with ID starting with 30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672 not found: ID does not exist" containerID="30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.725285 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672"} err="failed to get container status \"30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672\": rpc error: code = NotFound desc = could not find container \"30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672\": container with ID starting with 30316b82f18c558dd68bd006a4635865acff3c41d7650e1942cbec40c6a3f672 not 
found: ID does not exist" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.725310 4745 scope.go:117] "RemoveContainer" containerID="898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4" Jan 21 12:37:59 crc kubenswrapper[4745]: E0121 12:37:59.726432 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4\": container with ID starting with 898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4 not found: ID does not exist" containerID="898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.726506 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4"} err="failed to get container status \"898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4\": rpc error: code = NotFound desc = could not find container \"898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4\": container with ID starting with 898fa058e27d14d0b80a72bb6b7d1ba3312ca70a07b8d8f75ae0cc9c37ed60d4 not found: ID does not exist" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.726567 4745 scope.go:117] "RemoveContainer" containerID="41e9130a1b5b73e853116f145ae5c32167f8d14328611ce8435700c9ce377121" Jan 21 12:37:59 crc kubenswrapper[4745]: E0121 12:37:59.727710 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41e9130a1b5b73e853116f145ae5c32167f8d14328611ce8435700c9ce377121\": container with ID starting with 41e9130a1b5b73e853116f145ae5c32167f8d14328611ce8435700c9ce377121 not found: ID does not exist" containerID="41e9130a1b5b73e853116f145ae5c32167f8d14328611ce8435700c9ce377121" Jan 21 12:37:59 crc kubenswrapper[4745]: I0121 12:37:59.727781 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41e9130a1b5b73e853116f145ae5c32167f8d14328611ce8435700c9ce377121"} err="failed to get container status \"41e9130a1b5b73e853116f145ae5c32167f8d14328611ce8435700c9ce377121\": rpc error: code = NotFound desc = could not find container \"41e9130a1b5b73e853116f145ae5c32167f8d14328611ce8435700c9ce377121\": container with ID starting with 41e9130a1b5b73e853116f145ae5c32167f8d14328611ce8435700c9ce377121 not found: ID does not exist" Jan 21 12:38:00 crc kubenswrapper[4745]: I0121 12:38:00.011999 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" path="/var/lib/kubelet/pods/3bd7c55a-3334-485e-adb0-0c09d4d1b3e8/volumes" Jan 21 12:38:01 crc kubenswrapper[4745]: I0121 12:38:01.168279 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-tgbfw" podUID="328fd1db-9178-49e6-ab32-15989e163353" containerName="registry-server" probeResult="failure" output=< Jan 21 12:38:01 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:38:01 crc kubenswrapper[4745]: > Jan 21 12:38:10 crc kubenswrapper[4745]: I0121 12:38:10.206998 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:38:10 crc kubenswrapper[4745]: I0121 12:38:10.282593 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:38:10 crc kubenswrapper[4745]: I0121 12:38:10.985379 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tgbfw"] Jan 21 12:38:11 crc kubenswrapper[4745]: I0121 12:38:11.754419 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tgbfw" podUID="328fd1db-9178-49e6-ab32-15989e163353" 
containerName="registry-server" containerID="cri-o://2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9" gracePeriod=2 Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.506507 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.690914 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-catalog-content\") pod \"328fd1db-9178-49e6-ab32-15989e163353\" (UID: \"328fd1db-9178-49e6-ab32-15989e163353\") " Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.690984 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ll9fz\" (UniqueName: \"kubernetes.io/projected/328fd1db-9178-49e6-ab32-15989e163353-kube-api-access-ll9fz\") pod \"328fd1db-9178-49e6-ab32-15989e163353\" (UID: \"328fd1db-9178-49e6-ab32-15989e163353\") " Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.691248 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-utilities\") pod \"328fd1db-9178-49e6-ab32-15989e163353\" (UID: \"328fd1db-9178-49e6-ab32-15989e163353\") " Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.691996 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-utilities" (OuterVolumeSpecName: "utilities") pod "328fd1db-9178-49e6-ab32-15989e163353" (UID: "328fd1db-9178-49e6-ab32-15989e163353"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.700538 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/328fd1db-9178-49e6-ab32-15989e163353-kube-api-access-ll9fz" (OuterVolumeSpecName: "kube-api-access-ll9fz") pod "328fd1db-9178-49e6-ab32-15989e163353" (UID: "328fd1db-9178-49e6-ab32-15989e163353"). InnerVolumeSpecName "kube-api-access-ll9fz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.774503 4745 generic.go:334] "Generic (PLEG): container finished" podID="328fd1db-9178-49e6-ab32-15989e163353" containerID="2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9" exitCode=0 Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.774604 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tgbfw" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.775757 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgbfw" event={"ID":"328fd1db-9178-49e6-ab32-15989e163353","Type":"ContainerDied","Data":"2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9"} Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.775886 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tgbfw" event={"ID":"328fd1db-9178-49e6-ab32-15989e163353","Type":"ContainerDied","Data":"7603354feef6a91871b4f1917f2a53ff45526eea871cb55ac5edd00469a98fcb"} Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.775974 4745 scope.go:117] "RemoveContainer" containerID="2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.794067 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.794108 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ll9fz\" (UniqueName: \"kubernetes.io/projected/328fd1db-9178-49e6-ab32-15989e163353-kube-api-access-ll9fz\") on node \"crc\" DevicePath \"\"" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.809786 4745 scope.go:117] "RemoveContainer" containerID="dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.849707 4745 scope.go:117] "RemoveContainer" containerID="3e6b59feaff6bf1c9649ecfd4aab53bbc4200fc1dee0b207f8a6778d67096f54" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.916657 4745 scope.go:117] "RemoveContainer" containerID="2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9" Jan 21 12:38:12 crc kubenswrapper[4745]: E0121 12:38:12.919900 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9\": container with ID starting with 2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9 not found: ID does not exist" containerID="2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.920341 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9"} err="failed to get container status \"2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9\": rpc error: code = NotFound desc = could not find container \"2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9\": container with ID starting with 2f8fd2efae63c36e4f001dd5ae8ddfc74efc2af5e01aa9dd5b3d1e3d9226bba9 not found: ID does not exist" Jan 21 
12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.920429 4745 scope.go:117] "RemoveContainer" containerID="dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5" Jan 21 12:38:12 crc kubenswrapper[4745]: E0121 12:38:12.931225 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5\": container with ID starting with dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5 not found: ID does not exist" containerID="dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.931425 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5"} err="failed to get container status \"dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5\": rpc error: code = NotFound desc = could not find container \"dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5\": container with ID starting with dd4616d6c1b9623a0fb3e91edc0d83f11cb145a411b18fd46b6e15ae9ae715f5 not found: ID does not exist" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.931516 4745 scope.go:117] "RemoveContainer" containerID="3e6b59feaff6bf1c9649ecfd4aab53bbc4200fc1dee0b207f8a6778d67096f54" Jan 21 12:38:12 crc kubenswrapper[4745]: E0121 12:38:12.931949 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e6b59feaff6bf1c9649ecfd4aab53bbc4200fc1dee0b207f8a6778d67096f54\": container with ID starting with 3e6b59feaff6bf1c9649ecfd4aab53bbc4200fc1dee0b207f8a6778d67096f54 not found: ID does not exist" containerID="3e6b59feaff6bf1c9649ecfd4aab53bbc4200fc1dee0b207f8a6778d67096f54" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.932032 4745 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"3e6b59feaff6bf1c9649ecfd4aab53bbc4200fc1dee0b207f8a6778d67096f54"} err="failed to get container status \"3e6b59feaff6bf1c9649ecfd4aab53bbc4200fc1dee0b207f8a6778d67096f54\": rpc error: code = NotFound desc = could not find container \"3e6b59feaff6bf1c9649ecfd4aab53bbc4200fc1dee0b207f8a6778d67096f54\": container with ID starting with 3e6b59feaff6bf1c9649ecfd4aab53bbc4200fc1dee0b207f8a6778d67096f54 not found: ID does not exist" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.965053 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "328fd1db-9178-49e6-ab32-15989e163353" (UID: "328fd1db-9178-49e6-ab32-15989e163353"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:38:12 crc kubenswrapper[4745]: I0121 12:38:12.999954 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/328fd1db-9178-49e6-ab32-15989e163353-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:38:13 crc kubenswrapper[4745]: I0121 12:38:13.118100 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tgbfw"] Jan 21 12:38:13 crc kubenswrapper[4745]: I0121 12:38:13.132743 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tgbfw"] Jan 21 12:38:14 crc kubenswrapper[4745]: I0121 12:38:14.030777 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="328fd1db-9178-49e6-ab32-15989e163353" path="/var/lib/kubelet/pods/328fd1db-9178-49e6-ab32-15989e163353/volumes" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.833060 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8srwq"] Jan 21 12:38:16 crc kubenswrapper[4745]: E0121 
12:38:16.834234 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" containerName="extract-content" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.834258 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" containerName="extract-content" Jan 21 12:38:16 crc kubenswrapper[4745]: E0121 12:38:16.834278 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328fd1db-9178-49e6-ab32-15989e163353" containerName="registry-server" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.834286 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="328fd1db-9178-49e6-ab32-15989e163353" containerName="registry-server" Jan 21 12:38:16 crc kubenswrapper[4745]: E0121 12:38:16.834314 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" containerName="extract-utilities" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.834324 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" containerName="extract-utilities" Jan 21 12:38:16 crc kubenswrapper[4745]: E0121 12:38:16.834339 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" containerName="registry-server" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.834349 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" containerName="registry-server" Jan 21 12:38:16 crc kubenswrapper[4745]: E0121 12:38:16.834371 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328fd1db-9178-49e6-ab32-15989e163353" containerName="extract-content" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.834379 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="328fd1db-9178-49e6-ab32-15989e163353" containerName="extract-content" Jan 21 12:38:16 crc kubenswrapper[4745]: E0121 
12:38:16.834409 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="328fd1db-9178-49e6-ab32-15989e163353" containerName="extract-utilities" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.834420 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="328fd1db-9178-49e6-ab32-15989e163353" containerName="extract-utilities" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.834669 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd7c55a-3334-485e-adb0-0c09d4d1b3e8" containerName="registry-server" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.834707 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="328fd1db-9178-49e6-ab32-15989e163353" containerName="registry-server" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.836468 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.858505 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8srwq"] Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.980350 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-catalog-content\") pod \"redhat-marketplace-8srwq\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:16 crc kubenswrapper[4745]: I0121 12:38:16.980515 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7rcq\" (UniqueName: \"kubernetes.io/projected/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-kube-api-access-z7rcq\") pod \"redhat-marketplace-8srwq\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:16 crc 
kubenswrapper[4745]: I0121 12:38:16.980660 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-utilities\") pod \"redhat-marketplace-8srwq\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:17 crc kubenswrapper[4745]: I0121 12:38:17.082246 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-catalog-content\") pod \"redhat-marketplace-8srwq\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:17 crc kubenswrapper[4745]: I0121 12:38:17.082338 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7rcq\" (UniqueName: \"kubernetes.io/projected/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-kube-api-access-z7rcq\") pod \"redhat-marketplace-8srwq\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:17 crc kubenswrapper[4745]: I0121 12:38:17.082431 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-utilities\") pod \"redhat-marketplace-8srwq\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:17 crc kubenswrapper[4745]: I0121 12:38:17.083372 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-catalog-content\") pod \"redhat-marketplace-8srwq\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:17 crc kubenswrapper[4745]: 
I0121 12:38:17.083419 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-utilities\") pod \"redhat-marketplace-8srwq\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:17 crc kubenswrapper[4745]: I0121 12:38:17.126307 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7rcq\" (UniqueName: \"kubernetes.io/projected/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-kube-api-access-z7rcq\") pod \"redhat-marketplace-8srwq\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:17 crc kubenswrapper[4745]: I0121 12:38:17.166863 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:17 crc kubenswrapper[4745]: I0121 12:38:17.915032 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8srwq"] Jan 21 12:38:18 crc kubenswrapper[4745]: I0121 12:38:18.826710 4745 generic.go:334] "Generic (PLEG): container finished" podID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" containerID="a07a6bf8db0e9945f30af2f0917615b3652e7a0b88624101dc519fe0f7b0d4ee" exitCode=0 Jan 21 12:38:18 crc kubenswrapper[4745]: I0121 12:38:18.826816 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8srwq" event={"ID":"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41","Type":"ContainerDied","Data":"a07a6bf8db0e9945f30af2f0917615b3652e7a0b88624101dc519fe0f7b0d4ee"} Jan 21 12:38:18 crc kubenswrapper[4745]: I0121 12:38:18.826995 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8srwq" event={"ID":"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41","Type":"ContainerStarted","Data":"bbd60261f40718141d1c5e7e88e011f4f0bfc607d869648a7fbd97d4c1c18554"} Jan 
21 12:38:19 crc kubenswrapper[4745]: I0121 12:38:19.836903 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8srwq" event={"ID":"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41","Type":"ContainerStarted","Data":"e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd"} Jan 21 12:38:20 crc kubenswrapper[4745]: I0121 12:38:20.863343 4745 generic.go:334] "Generic (PLEG): container finished" podID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" containerID="e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd" exitCode=0 Jan 21 12:38:20 crc kubenswrapper[4745]: I0121 12:38:20.864103 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8srwq" event={"ID":"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41","Type":"ContainerDied","Data":"e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd"} Jan 21 12:38:21 crc kubenswrapper[4745]: I0121 12:38:21.874651 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8srwq" event={"ID":"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41","Type":"ContainerStarted","Data":"f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2"} Jan 21 12:38:21 crc kubenswrapper[4745]: I0121 12:38:21.897758 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8srwq" podStartSLOduration=3.433965051 podStartE2EDuration="5.897735509s" podCreationTimestamp="2026-01-21 12:38:16 +0000 UTC" firstStartedPulling="2026-01-21 12:38:18.829261046 +0000 UTC m=+7283.290048644" lastFinishedPulling="2026-01-21 12:38:21.293031504 +0000 UTC m=+7285.753819102" observedRunningTime="2026-01-21 12:38:21.893191242 +0000 UTC m=+7286.353978840" watchObservedRunningTime="2026-01-21 12:38:21.897735509 +0000 UTC m=+7286.358523107" Jan 21 12:38:27 crc kubenswrapper[4745]: I0121 12:38:27.168609 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:27 crc kubenswrapper[4745]: I0121 12:38:27.168918 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:27 crc kubenswrapper[4745]: I0121 12:38:27.263856 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:27 crc kubenswrapper[4745]: I0121 12:38:27.965005 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:28 crc kubenswrapper[4745]: I0121 12:38:28.021172 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8srwq"] Jan 21 12:38:29 crc kubenswrapper[4745]: I0121 12:38:29.932844 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8srwq" podUID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" containerName="registry-server" containerID="cri-o://f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2" gracePeriod=2 Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.431676 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.592102 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-utilities\") pod \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.592213 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7rcq\" (UniqueName: \"kubernetes.io/projected/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-kube-api-access-z7rcq\") pod \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.592366 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-catalog-content\") pod \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\" (UID: \"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41\") " Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.597802 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-utilities" (OuterVolumeSpecName: "utilities") pod "7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" (UID: "7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.625786 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-kube-api-access-z7rcq" (OuterVolumeSpecName: "kube-api-access-z7rcq") pod "7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" (UID: "7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41"). InnerVolumeSpecName "kube-api-access-z7rcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.656492 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" (UID: "7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.694543 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.694589 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7rcq\" (UniqueName: \"kubernetes.io/projected/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-kube-api-access-z7rcq\") on node \"crc\" DevicePath \"\"" Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.694600 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.944423 4745 generic.go:334] "Generic (PLEG): container finished" podID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" containerID="f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2" exitCode=0 Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.944467 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8srwq" event={"ID":"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41","Type":"ContainerDied","Data":"f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2"} Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.944490 4745 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8srwq" Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.944506 4745 scope.go:117] "RemoveContainer" containerID="f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2" Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.944494 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8srwq" event={"ID":"7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41","Type":"ContainerDied","Data":"bbd60261f40718141d1c5e7e88e011f4f0bfc607d869648a7fbd97d4c1c18554"} Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.974380 4745 scope.go:117] "RemoveContainer" containerID="e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd" Jan 21 12:38:30 crc kubenswrapper[4745]: I0121 12:38:30.980762 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8srwq"] Jan 21 12:38:31 crc kubenswrapper[4745]: I0121 12:38:31.025778 4745 scope.go:117] "RemoveContainer" containerID="a07a6bf8db0e9945f30af2f0917615b3652e7a0b88624101dc519fe0f7b0d4ee" Jan 21 12:38:31 crc kubenswrapper[4745]: I0121 12:38:31.035842 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8srwq"] Jan 21 12:38:31 crc kubenswrapper[4745]: I0121 12:38:31.072158 4745 scope.go:117] "RemoveContainer" containerID="f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2" Jan 21 12:38:31 crc kubenswrapper[4745]: E0121 12:38:31.073964 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2\": container with ID starting with f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2 not found: ID does not exist" containerID="f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2" Jan 21 12:38:31 crc kubenswrapper[4745]: I0121 12:38:31.073997 4745 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2"} err="failed to get container status \"f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2\": rpc error: code = NotFound desc = could not find container \"f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2\": container with ID starting with f9bc8ce9ec99d2f1dc50583de16a4065577ad1485a9d46d0ead8a7e1352cdaf2 not found: ID does not exist" Jan 21 12:38:31 crc kubenswrapper[4745]: I0121 12:38:31.074019 4745 scope.go:117] "RemoveContainer" containerID="e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd" Jan 21 12:38:31 crc kubenswrapper[4745]: E0121 12:38:31.075170 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd\": container with ID starting with e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd not found: ID does not exist" containerID="e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd" Jan 21 12:38:31 crc kubenswrapper[4745]: I0121 12:38:31.075349 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd"} err="failed to get container status \"e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd\": rpc error: code = NotFound desc = could not find container \"e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd\": container with ID starting with e91ef80eb9592bd20a31956308322602259dbdc5bc53e3abf5ff5406273ca8dd not found: ID does not exist" Jan 21 12:38:31 crc kubenswrapper[4745]: I0121 12:38:31.075365 4745 scope.go:117] "RemoveContainer" containerID="a07a6bf8db0e9945f30af2f0917615b3652e7a0b88624101dc519fe0f7b0d4ee" Jan 21 12:38:31 crc kubenswrapper[4745]: E0121 
12:38:31.075583 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a07a6bf8db0e9945f30af2f0917615b3652e7a0b88624101dc519fe0f7b0d4ee\": container with ID starting with a07a6bf8db0e9945f30af2f0917615b3652e7a0b88624101dc519fe0f7b0d4ee not found: ID does not exist" containerID="a07a6bf8db0e9945f30af2f0917615b3652e7a0b88624101dc519fe0f7b0d4ee" Jan 21 12:38:31 crc kubenswrapper[4745]: I0121 12:38:31.075604 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a07a6bf8db0e9945f30af2f0917615b3652e7a0b88624101dc519fe0f7b0d4ee"} err="failed to get container status \"a07a6bf8db0e9945f30af2f0917615b3652e7a0b88624101dc519fe0f7b0d4ee\": rpc error: code = NotFound desc = could not find container \"a07a6bf8db0e9945f30af2f0917615b3652e7a0b88624101dc519fe0f7b0d4ee\": container with ID starting with a07a6bf8db0e9945f30af2f0917615b3652e7a0b88624101dc519fe0f7b0d4ee not found: ID does not exist" Jan 21 12:38:32 crc kubenswrapper[4745]: I0121 12:38:32.011665 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" path="/var/lib/kubelet/pods/7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41/volumes" Jan 21 12:39:12 crc kubenswrapper[4745]: I0121 12:39:12.860608 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lgq6w_ad7637e4-fd78-447b-98ea-20af5f3c5c2a/controller/0.log" Jan 21 12:39:12 crc kubenswrapper[4745]: I0121 12:39:12.867056 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-lgq6w_ad7637e4-fd78-447b-98ea-20af5f3c5c2a/kube-rbac-proxy/0.log" Jan 21 12:39:12 crc kubenswrapper[4745]: I0121 12:39:12.886446 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/controller/0.log" Jan 21 12:39:13 crc kubenswrapper[4745]: I0121 12:39:13.085777 4745 
log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-s5t4j_60b550eb-7b13-4042-99c2-70f21e9ec81f/cert-manager-controller/0.log" Jan 21 12:39:13 crc kubenswrapper[4745]: I0121 12:39:13.105097 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rgtt2_6f55bdba-45e5-485d-ae8f-a8576885b3ff/cert-manager-cainjector/0.log" Jan 21 12:39:13 crc kubenswrapper[4745]: I0121 12:39:13.119945 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-7xg5s_28ac8429-55e4-4387-99d2-f20e654f0dde/cert-manager-webhook/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.492472 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/frr/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.505589 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/reloader/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.514899 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/frr-metrics/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.517989 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2_e386ddd7-8bcd-4130-b5f8-1ec63b3c515a/extract/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.523091 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/kube-rbac-proxy/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.533585 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2_e386ddd7-8bcd-4130-b5f8-1ec63b3c515a/util/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.534065 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/kube-rbac-proxy-frr/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.544022 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/cp-frr-files/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.547101 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2_e386ddd7-8bcd-4130-b5f8-1ec63b3c515a/pull/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.572639 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/cp-reloader/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.582193 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9f9vp_db2f79cd-c6c7-459f-bf98-002583ba5ddd/cp-metrics/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.603948 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-dq466_5e2a9cf8-053e-4225-b055-45d69ebfaa94/frr-k8s-webhook-server/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.618668 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-bqhjj_f99a5f65-e2aa-4476-b4c6-6566761f1ad2/manager/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.629892 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-65d59f8cf8-8xqnr_cf161197-4160-49ab-a126-edca468534b7/manager/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.652346 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b7c494555-zdlbt_1be9da42-8db6-47b9-b7ec-788b04db264d/webhook-server/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.657291 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-qcrlk_d9337025-a702-4dd2-b8a4-e807525a34f5/manager/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.670764 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-hw9zg_bc9be084-edd6-4556-88af-354f416d451c/manager/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.834824 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gntws_9ff19137-02fd-4de1-9601-95a5c0fbbed0/manager/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.935091 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-g4gpj_b28edf64-70dc-4fc2-8d7f-c1f141cbd31e/manager/0.log" Jan 21 12:39:14 crc kubenswrapper[4745]: I0121 12:39:14.979746 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sqhft_784904b1-a1d9-4319-be67-34e3dfdc1c9a/manager/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.291384 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-64hm8_88871d5a-093a-41c6-98bf-629e6769ba71/speaker/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.303243 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-64hm8_88871d5a-093a-41c6-98bf-629e6769ba71/kube-rbac-proxy/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.385578 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-4nt9f_2528950f-ec80-4609-a77c-d6fbb2768e3b/manager/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.399676 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-clbcs_2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462/manager/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.488841 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-fh7ts_fb04ba1c-d6a0-40aa-b985-f4715cb11257/manager/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.501798 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-dvhql_dfb1f262-fe24-45bf-8f75-0e2a81989f3f/manager/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.549326 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8xm9d_c0985a55-6ede-4214-87fe-27cb5668dd86/manager/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.614937 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-x9mpf_42c37f0d-415a-4a72-ae98-07551477c6cc/manager/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.707521 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-g8j7m_be658ac1-07b6-482b-8b99-35a75fcf3b50/manager/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.717952 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-bx656_a96f3189-7bbc-404d-ad6d-05b8fefb65fc/manager/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.744679 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4_1f562ebe-222a-441b-9277-0aa69a0c0fb3/manager/0.log" Jan 21 12:39:15 crc kubenswrapper[4745]: I0121 12:39:15.865493 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-777994b6d8-xpq4v_8381ff45-ae46-437a-894e-1530d39397f8/operator/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.032090 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-78d57d4fdd-dxmll_8ed49bb1-d169-4518-b064-3fb35fd1bad0/manager/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.047613 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-l4qmd_fa66bbac-12d5-40aa-b852-00ddac9637a1/registry-server/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.081204 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-s5t4j_60b550eb-7b13-4042-99c2-70f21e9ec81f/cert-manager-controller/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.104395 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-rgtt2_6f55bdba-45e5-485d-ae8f-a8576885b3ff/cert-manager-cainjector/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.115010 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-7xg5s_28ac8429-55e4-4387-99d2-f20e654f0dde/cert-manager-webhook/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.126595 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-j96sf_a292ef63-66c6-4416-8212-7b06a9bb8761/manager/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.153190 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-8v4t6_ab348be4-f24d-41f5-947a-7f49dc330aa9/manager/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.172155 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-s8zz8_1efe6d30-3c28-4945-8615-49cafec58641/operator/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.192160 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-46lz5_57b58631-9efc-4cdb-bb89-47aa70a6bd98/manager/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.248503 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-dh2t4_dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19/manager/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.260111 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-q4ccb_10226f41-eb60-45bf-a116-c51f3de0ea39/manager/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.271949 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-bg5mt_94d1ae33-41a7-414c-b0d9-cc843ca9fa47/manager/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.946222 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-nfgt5_1eb90eab-f69a-4fef-aef1-b8f4473b91fd/control-plane-machine-set-operator/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.968422 4745 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dfzgf_b8f5958e-78cf-428c-b9c0-abae011b2de4/kube-rbac-proxy/0.log" Jan 21 12:39:17 crc kubenswrapper[4745]: I0121 12:39:17.977880 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-dfzgf_b8f5958e-78cf-428c-b9c0-abae011b2de4/machine-api-operator/0.log" Jan 21 12:39:18 crc kubenswrapper[4745]: I0121 12:39:18.829962 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-54v72_5f632930-37d6-4083-80d2-e56d394f5289/nmstate-console-plugin/0.log" Jan 21 12:39:18 crc kubenswrapper[4745]: I0121 12:39:18.853033 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-bpmz2_976354ad-a346-409e-893a-d8edb62a6148/nmstate-handler/0.log" Jan 21 12:39:18 crc kubenswrapper[4745]: I0121 12:39:18.867712 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9t5nq_02756c63-b6cc-42ef-ba04-fbd6127ccfa7/nmstate-metrics/0.log" Jan 21 12:39:18 crc kubenswrapper[4745]: I0121 12:39:18.876794 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-9t5nq_02756c63-b6cc-42ef-ba04-fbd6127ccfa7/kube-rbac-proxy/0.log" Jan 21 12:39:18 crc kubenswrapper[4745]: I0121 12:39:18.887399 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2_e386ddd7-8bcd-4130-b5f8-1ec63b3c515a/extract/0.log" Jan 21 12:39:18 crc kubenswrapper[4745]: I0121 12:39:18.889425 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-27x5h_26a2f875-6a73-4039-b234-7f628c77bdda/nmstate-operator/0.log" Jan 21 12:39:18 crc kubenswrapper[4745]: I0121 12:39:18.896158 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2_e386ddd7-8bcd-4130-b5f8-1ec63b3c515a/util/0.log" Jan 21 12:39:18 crc kubenswrapper[4745]: I0121 12:39:18.902427 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-k4fch_89a613eb-ec6f-48dc-97d8-38e59281d04e/nmstate-webhook/0.log" Jan 21 12:39:18 crc kubenswrapper[4745]: I0121 12:39:18.903145 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_78b144ef57b1dfd337e16c8eed6180bcab4a3a8dcac948995607e0ed23tp7s2_e386ddd7-8bcd-4130-b5f8-1ec63b3c515a/pull/0.log" Jan 21 12:39:18 crc kubenswrapper[4745]: I0121 12:39:18.984027 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7ddb5c749-bqhjj_f99a5f65-e2aa-4476-b4c6-6566761f1ad2/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.023226 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-9b68f5989-qcrlk_d9337025-a702-4dd2-b8a4-e807525a34f5/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.039296 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-9f958b845-hw9zg_bc9be084-edd6-4556-88af-354f416d451c/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.121584 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-c6994669c-gntws_9ff19137-02fd-4de1-9601-95a5c0fbbed0/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.187061 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-g4gpj_b28edf64-70dc-4fc2-8d7f-c1f141cbd31e/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.210936 4745 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-sqhft_784904b1-a1d9-4319-be67-34e3dfdc1c9a/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.498435 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-77c48c7859-4nt9f_2528950f-ec80-4609-a77c-d6fbb2768e3b/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.511834 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-78757b4889-clbcs_2134ae1d-74cb-4b1e-a2e7-f9aab5bdc462/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.590337 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-767fdc4f47-fh7ts_fb04ba1c-d6a0-40aa-b985-f4715cb11257/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.600941 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-864f6b75bf-dvhql_dfb1f262-fe24-45bf-8f75-0e2a81989f3f/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.636332 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-c87fff755-8xm9d_c0985a55-6ede-4214-87fe-27cb5668dd86/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.688087 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-cb4666565-x9mpf_42c37f0d-415a-4a72-ae98-07551477c6cc/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.781338 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-65849867d6-g8j7m_be658ac1-07b6-482b-8b99-35a75fcf3b50/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.799345 4745 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7fc9b76cf6-bx656_a96f3189-7bbc-404d-ad6d-05b8fefb65fc/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.813316 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b854ml2q4_1f562ebe-222a-441b-9277-0aa69a0c0fb3/manager/0.log" Jan 21 12:39:19 crc kubenswrapper[4745]: I0121 12:39:19.903557 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-777994b6d8-xpq4v_8381ff45-ae46-437a-894e-1530d39397f8/operator/0.log" Jan 21 12:39:21 crc kubenswrapper[4745]: I0121 12:39:21.182133 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-78d57d4fdd-dxmll_8ed49bb1-d169-4518-b064-3fb35fd1bad0/manager/0.log" Jan 21 12:39:21 crc kubenswrapper[4745]: I0121 12:39:21.200582 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-l4qmd_fa66bbac-12d5-40aa-b852-00ddac9637a1/registry-server/0.log" Jan 21 12:39:21 crc kubenswrapper[4745]: I0121 12:39:21.258989 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-55db956ddc-j96sf_a292ef63-66c6-4416-8212-7b06a9bb8761/manager/0.log" Jan 21 12:39:21 crc kubenswrapper[4745]: I0121 12:39:21.293209 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-686df47fcb-8v4t6_ab348be4-f24d-41f5-947a-7f49dc330aa9/manager/0.log" Jan 21 12:39:21 crc kubenswrapper[4745]: I0121 12:39:21.312011 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-s8zz8_1efe6d30-3c28-4945-8615-49cafec58641/operator/0.log" Jan 21 12:39:21 crc kubenswrapper[4745]: I0121 12:39:21.334570 
4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-85dd56d4cc-46lz5_57b58631-9efc-4cdb-bb89-47aa70a6bd98/manager/0.log" Jan 21 12:39:21 crc kubenswrapper[4745]: I0121 12:39:21.402555 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5f8f495fcf-dh2t4_dcb0c83f-93ab-4dcd-abc6-a9b99b8c6c19/manager/0.log" Jan 21 12:39:21 crc kubenswrapper[4745]: I0121 12:39:21.426810 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-7cd8bc9dbb-q4ccb_10226f41-eb60-45bf-a116-c51f3de0ea39/manager/0.log" Jan 21 12:39:21 crc kubenswrapper[4745]: I0121 12:39:21.440135 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-64cd966744-bg5mt_94d1ae33-41a7-414c-b0d9-cc843ca9fa47/manager/0.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.667466 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pnnzc_37687014-8686-4419-980d-e754a7f7037f/kube-multus-additional-cni-plugins/0.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.685901 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pnnzc_37687014-8686-4419-980d-e754a7f7037f/egress-router-binary-copy/0.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.695059 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pnnzc_37687014-8686-4419-980d-e754a7f7037f/cni-plugins/0.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.702470 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pnnzc_37687014-8686-4419-980d-e754a7f7037f/bond-cni-plugin/0.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.713920 4745 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pnnzc_37687014-8686-4419-980d-e754a7f7037f/routeoverride-cni/0.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.726928 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pnnzc_37687014-8686-4419-980d-e754a7f7037f/whereabouts-cni-bincopy/0.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.733986 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-pnnzc_37687014-8686-4419-980d-e754a7f7037f/whereabouts-cni/0.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.770051 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-dhkkd_ea889c30-b820-47fa-8232-f96ed56ba8e1/multus-admission-controller/0.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.787085 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-dhkkd_ea889c30-b820-47fa-8232-f96ed56ba8e1/kube-rbac-proxy/0.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.833146 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/2.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.930744 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p8q45_25458900-3da2-4c9d-8463-9acde2add0a6/kube-multus/3.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.966405 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-px52r_df21a803-8072-4f8f-8f3a-00267f9c3419/network-metrics-daemon/0.log" Jan 21 12:39:23 crc kubenswrapper[4745]: I0121 12:39:23.972213 4745 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_network-metrics-daemon-px52r_df21a803-8072-4f8f-8f3a-00267f9c3419/kube-rbac-proxy/0.log" Jan 21 12:40:15 crc kubenswrapper[4745]: I0121 12:40:15.867067 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:40:15 crc kubenswrapper[4745]: I0121 12:40:15.867476 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:40:45 crc kubenswrapper[4745]: I0121 12:40:45.866777 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:40:45 crc kubenswrapper[4745]: I0121 12:40:45.867246 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:41:15 crc kubenswrapper[4745]: I0121 12:41:15.867142 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 21 12:41:15 crc 
kubenswrapper[4745]: I0121 12:41:15.868489 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 21 12:41:15 crc kubenswrapper[4745]: I0121 12:41:15.869121 4745 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" Jan 21 12:41:15 crc kubenswrapper[4745]: I0121 12:41:15.871063 4745 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8"} pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 21 12:41:15 crc kubenswrapper[4745]: I0121 12:41:15.871166 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" containerID="cri-o://cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" gracePeriod=600 Jan 21 12:41:16 crc kubenswrapper[4745]: E0121 12:41:16.023791 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:41:16 crc kubenswrapper[4745]: I0121 12:41:16.569593 4745 generic.go:334] "Generic 
(PLEG): container finished" podID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" exitCode=0 Jan 21 12:41:16 crc kubenswrapper[4745]: I0121 12:41:16.569640 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerDied","Data":"cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8"} Jan 21 12:41:16 crc kubenswrapper[4745]: I0121 12:41:16.569675 4745 scope.go:117] "RemoveContainer" containerID="85539d27e9372360a7e1ae69ec8f1ac0bf3b97a0b8949368acf6d172b6f2ebe7" Jan 21 12:41:16 crc kubenswrapper[4745]: I0121 12:41:16.570446 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:41:16 crc kubenswrapper[4745]: E0121 12:41:16.570919 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:41:30 crc kubenswrapper[4745]: I0121 12:41:30.001142 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:41:30 crc kubenswrapper[4745]: E0121 12:41:30.002259 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" 
podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:41:42 crc kubenswrapper[4745]: I0121 12:41:42.001332 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:41:42 crc kubenswrapper[4745]: E0121 12:41:42.005010 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:41:55 crc kubenswrapper[4745]: I0121 12:41:55.001107 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:41:55 crc kubenswrapper[4745]: E0121 12:41:55.001983 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:42:08 crc kubenswrapper[4745]: I0121 12:42:08.005251 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:42:08 crc kubenswrapper[4745]: E0121 12:42:08.006244 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:42:19 crc kubenswrapper[4745]: I0121 12:42:19.000458 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:42:19 crc kubenswrapper[4745]: E0121 12:42:19.003245 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:42:30 crc kubenswrapper[4745]: I0121 12:42:30.000882 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:42:30 crc kubenswrapper[4745]: E0121 12:42:30.002010 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:42:41 crc kubenswrapper[4745]: I0121 12:42:41.000748 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:42:41 crc kubenswrapper[4745]: E0121 12:42:41.001445 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:42:52 crc kubenswrapper[4745]: I0121 12:42:52.000211 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:42:52 crc kubenswrapper[4745]: E0121 12:42:52.001013 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:42:59 crc kubenswrapper[4745]: I0121 12:42:59.480931 4745 scope.go:117] "RemoveContainer" containerID="9e3fe7f1c5532acd8ca1fa364aa035cd759d2e2da89d26cc20d8eb651e044c37" Jan 21 12:42:59 crc kubenswrapper[4745]: I0121 12:42:59.558509 4745 scope.go:117] "RemoveContainer" containerID="9def2a8c90053562b908bcb2b0a6fab74743eee79a1242eb81f35aa040a04536" Jan 21 12:43:03 crc kubenswrapper[4745]: I0121 12:43:03.000367 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:43:03 crc kubenswrapper[4745]: E0121 12:43:03.001204 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:43:15 crc kubenswrapper[4745]: I0121 12:43:15.000732 4745 
scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:43:15 crc kubenswrapper[4745]: E0121 12:43:15.001495 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:43:29 crc kubenswrapper[4745]: I0121 12:43:29.000178 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:43:29 crc kubenswrapper[4745]: E0121 12:43:29.001271 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:43:43 crc kubenswrapper[4745]: I0121 12:43:43.000378 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:43:43 crc kubenswrapper[4745]: E0121 12:43:43.001311 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:43:54 crc kubenswrapper[4745]: I0121 
12:43:54.000799 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:43:54 crc kubenswrapper[4745]: E0121 12:43:54.001896 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:44:07 crc kubenswrapper[4745]: I0121 12:44:07.001290 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:44:07 crc kubenswrapper[4745]: E0121 12:44:07.002337 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:44:18 crc kubenswrapper[4745]: I0121 12:44:18.001470 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:44:18 crc kubenswrapper[4745]: E0121 12:44:18.002594 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:44:33 crc 
kubenswrapper[4745]: I0121 12:44:33.000613 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:44:33 crc kubenswrapper[4745]: E0121 12:44:33.001366 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:44:44 crc kubenswrapper[4745]: I0121 12:44:44.000960 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:44:44 crc kubenswrapper[4745]: E0121 12:44:44.002069 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.662757 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tw2cw"] Jan 21 12:44:53 crc kubenswrapper[4745]: E0121 12:44:53.663758 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" containerName="extract-content" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.663775 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" containerName="extract-content" Jan 21 12:44:53 crc kubenswrapper[4745]: E0121 12:44:53.663811 4745 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" containerName="extract-utilities" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.663820 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" containerName="extract-utilities" Jan 21 12:44:53 crc kubenswrapper[4745]: E0121 12:44:53.663855 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" containerName="registry-server" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.663865 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" containerName="registry-server" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.664101 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d2ddd46-16e3-453a-8fd8-dafcf4b3bc41" containerName="registry-server" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.668256 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.695192 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tw2cw"] Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.748672 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-utilities\") pod \"community-operators-tw2cw\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.748721 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crptk\" (UniqueName: \"kubernetes.io/projected/b96e7474-6a17-4be7-ba7e-fed224e36a9c-kube-api-access-crptk\") pod \"community-operators-tw2cw\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.748870 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-catalog-content\") pod \"community-operators-tw2cw\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.850671 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-catalog-content\") pod \"community-operators-tw2cw\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.850874 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-utilities\") pod \"community-operators-tw2cw\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.850899 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crptk\" (UniqueName: \"kubernetes.io/projected/b96e7474-6a17-4be7-ba7e-fed224e36a9c-kube-api-access-crptk\") pod \"community-operators-tw2cw\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.851416 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-catalog-content\") pod \"community-operators-tw2cw\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.851453 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-utilities\") pod \"community-operators-tw2cw\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.874506 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crptk\" (UniqueName: \"kubernetes.io/projected/b96e7474-6a17-4be7-ba7e-fed224e36a9c-kube-api-access-crptk\") pod \"community-operators-tw2cw\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:44:53 crc kubenswrapper[4745]: I0121 12:44:53.991196 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:44:54 crc kubenswrapper[4745]: I0121 12:44:54.659980 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tw2cw"] Jan 21 12:44:54 crc kubenswrapper[4745]: I0121 12:44:54.880795 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tw2cw" event={"ID":"b96e7474-6a17-4be7-ba7e-fed224e36a9c","Type":"ContainerStarted","Data":"07f727d46bf410b967f4799e14a72c1c2240b1450e40317f5a3fd8ba1e7a347e"} Jan 21 12:44:55 crc kubenswrapper[4745]: I0121 12:44:55.890102 4745 generic.go:334] "Generic (PLEG): container finished" podID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" containerID="a29ec2365a961cb654982fc384c07e72e51a2be554f95782bc5a858a8ed59e62" exitCode=0 Jan 21 12:44:55 crc kubenswrapper[4745]: I0121 12:44:55.890158 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tw2cw" event={"ID":"b96e7474-6a17-4be7-ba7e-fed224e36a9c","Type":"ContainerDied","Data":"a29ec2365a961cb654982fc384c07e72e51a2be554f95782bc5a858a8ed59e62"} Jan 21 12:44:55 crc kubenswrapper[4745]: I0121 12:44:55.892214 4745 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 21 12:44:56 crc kubenswrapper[4745]: I0121 12:44:56.900795 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tw2cw" event={"ID":"b96e7474-6a17-4be7-ba7e-fed224e36a9c","Type":"ContainerStarted","Data":"ca4bc9f8f127ad81824af88bf7ad0bb36b7d9d9286198c25f3d824de29bb872c"} Jan 21 12:44:58 crc kubenswrapper[4745]: I0121 12:44:58.924891 4745 generic.go:334] "Generic (PLEG): container finished" podID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" containerID="ca4bc9f8f127ad81824af88bf7ad0bb36b7d9d9286198c25f3d824de29bb872c" exitCode=0 Jan 21 12:44:58 crc kubenswrapper[4745]: I0121 12:44:58.924960 4745 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-tw2cw" event={"ID":"b96e7474-6a17-4be7-ba7e-fed224e36a9c","Type":"ContainerDied","Data":"ca4bc9f8f127ad81824af88bf7ad0bb36b7d9d9286198c25f3d824de29bb872c"} Jan 21 12:44:59 crc kubenswrapper[4745]: I0121 12:44:59.000620 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:44:59 crc kubenswrapper[4745]: E0121 12:44:59.000872 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:44:59 crc kubenswrapper[4745]: I0121 12:44:59.938732 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tw2cw" event={"ID":"b96e7474-6a17-4be7-ba7e-fed224e36a9c","Type":"ContainerStarted","Data":"54461afc49105b7cbb9d5c4bc9a5dee93b5edb5038a0ab3571038ae4cbceb4a7"} Jan 21 12:44:59 crc kubenswrapper[4745]: I0121 12:44:59.961387 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tw2cw" podStartSLOduration=3.51034477 podStartE2EDuration="6.96136909s" podCreationTimestamp="2026-01-21 12:44:53 +0000 UTC" firstStartedPulling="2026-01-21 12:44:55.891701778 +0000 UTC m=+7680.352489376" lastFinishedPulling="2026-01-21 12:44:59.342726098 +0000 UTC m=+7683.803513696" observedRunningTime="2026-01-21 12:44:59.95920666 +0000 UTC m=+7684.419994258" watchObservedRunningTime="2026-01-21 12:44:59.96136909 +0000 UTC m=+7684.422156688" Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.225116 4745 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk"] Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.226330 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.234681 4745 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.234689 4745 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.238980 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk"] Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.306923 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-config-volume\") pod \"collect-profiles-29483325-s4xvk\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.307031 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-secret-volume\") pod \"collect-profiles-29483325-s4xvk\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.307058 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxv78\" (UniqueName: 
\"kubernetes.io/projected/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-kube-api-access-zxv78\") pod \"collect-profiles-29483325-s4xvk\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.408444 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-secret-volume\") pod \"collect-profiles-29483325-s4xvk\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.408495 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxv78\" (UniqueName: \"kubernetes.io/projected/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-kube-api-access-zxv78\") pod \"collect-profiles-29483325-s4xvk\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.408663 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-config-volume\") pod \"collect-profiles-29483325-s4xvk\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:00 crc kubenswrapper[4745]: I0121 12:45:00.409664 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-config-volume\") pod \"collect-profiles-29483325-s4xvk\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:01 crc kubenswrapper[4745]: I0121 
12:45:01.261675 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-secret-volume\") pod \"collect-profiles-29483325-s4xvk\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:01 crc kubenswrapper[4745]: I0121 12:45:01.262156 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxv78\" (UniqueName: \"kubernetes.io/projected/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-kube-api-access-zxv78\") pod \"collect-profiles-29483325-s4xvk\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:01 crc kubenswrapper[4745]: I0121 12:45:01.444454 4745 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:02 crc kubenswrapper[4745]: I0121 12:45:02.326115 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk"] Jan 21 12:45:02 crc kubenswrapper[4745]: I0121 12:45:02.964566 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" event={"ID":"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3","Type":"ContainerStarted","Data":"86007659dfe6d1190deb90d3dfe022f4a63418919471bc71323496342a43b0ec"} Jan 21 12:45:02 crc kubenswrapper[4745]: I0121 12:45:02.964904 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" event={"ID":"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3","Type":"ContainerStarted","Data":"a1ca6e8f9611c2ad9f3c7564f30957d665627eaaf6402d79f74cb3682d306f79"} Jan 21 12:45:02 crc kubenswrapper[4745]: I0121 12:45:02.986841 4745 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" podStartSLOduration=2.986817838 podStartE2EDuration="2.986817838s" podCreationTimestamp="2026-01-21 12:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-21 12:45:02.979038202 +0000 UTC m=+7687.439825800" watchObservedRunningTime="2026-01-21 12:45:02.986817838 +0000 UTC m=+7687.447605436" Jan 21 12:45:03 crc kubenswrapper[4745]: I0121 12:45:03.973900 4745 generic.go:334] "Generic (PLEG): container finished" podID="9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3" containerID="86007659dfe6d1190deb90d3dfe022f4a63418919471bc71323496342a43b0ec" exitCode=0 Jan 21 12:45:03 crc kubenswrapper[4745]: I0121 12:45:03.973951 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" event={"ID":"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3","Type":"ContainerDied","Data":"86007659dfe6d1190deb90d3dfe022f4a63418919471bc71323496342a43b0ec"} Jan 21 12:45:03 crc kubenswrapper[4745]: I0121 12:45:03.992247 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:45:03 crc kubenswrapper[4745]: I0121 12:45:03.992312 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:45:05 crc kubenswrapper[4745]: I0121 12:45:05.064549 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tw2cw" podUID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" containerName="registry-server" probeResult="failure" output=< Jan 21 12:45:05 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:45:05 crc kubenswrapper[4745]: > Jan 21 12:45:06 crc kubenswrapper[4745]: I0121 12:45:06.085366 4745 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:06 crc kubenswrapper[4745]: I0121 12:45:06.165437 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxv78\" (UniqueName: \"kubernetes.io/projected/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-kube-api-access-zxv78\") pod \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " Jan 21 12:45:06 crc kubenswrapper[4745]: I0121 12:45:06.165670 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-config-volume\") pod \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " Jan 21 12:45:06 crc kubenswrapper[4745]: I0121 12:45:06.165710 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-secret-volume\") pod \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\" (UID: \"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3\") " Jan 21 12:45:06 crc kubenswrapper[4745]: I0121 12:45:06.167129 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-config-volume" (OuterVolumeSpecName: "config-volume") pod "9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3" (UID: "9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 21 12:45:06 crc kubenswrapper[4745]: I0121 12:45:06.171998 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-kube-api-access-zxv78" (OuterVolumeSpecName: "kube-api-access-zxv78") pod "9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3" (UID: "9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3"). 
InnerVolumeSpecName "kube-api-access-zxv78". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:45:06 crc kubenswrapper[4745]: I0121 12:45:06.172787 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3" (UID: "9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 21 12:45:06 crc kubenswrapper[4745]: I0121 12:45:06.267906 4745 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:06 crc kubenswrapper[4745]: I0121 12:45:06.267952 4745 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:06 crc kubenswrapper[4745]: I0121 12:45:06.267963 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxv78\" (UniqueName: \"kubernetes.io/projected/9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3-kube-api-access-zxv78\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:07 crc kubenswrapper[4745]: I0121 12:45:07.007920 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" event={"ID":"9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3","Type":"ContainerDied","Data":"a1ca6e8f9611c2ad9f3c7564f30957d665627eaaf6402d79f74cb3682d306f79"} Jan 21 12:45:07 crc kubenswrapper[4745]: I0121 12:45:07.007985 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29483325-s4xvk" Jan 21 12:45:07 crc kubenswrapper[4745]: I0121 12:45:07.007962 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ca6e8f9611c2ad9f3c7564f30957d665627eaaf6402d79f74cb3682d306f79" Jan 21 12:45:07 crc kubenswrapper[4745]: I0121 12:45:07.179111 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84"] Jan 21 12:45:07 crc kubenswrapper[4745]: I0121 12:45:07.188121 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29483280-cnd84"] Jan 21 12:45:08 crc kubenswrapper[4745]: I0121 12:45:08.012975 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="673fc212-6fed-4d90-9b92-7d6e1c9fecf5" path="/var/lib/kubelet/pods/673fc212-6fed-4d90-9b92-7d6e1c9fecf5/volumes" Jan 21 12:45:10 crc kubenswrapper[4745]: I0121 12:45:10.003185 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:45:10 crc kubenswrapper[4745]: E0121 12:45:10.003686 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:45:11 crc kubenswrapper[4745]: I0121 12:45:11.755167 4745 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-bpmz2" podUID="976354ad-a346-409e-893a-d8edb62a6148" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 21 12:45:13 crc kubenswrapper[4745]: I0121 12:45:13.052120 4745 
trace.go:236] Trace[827376688]: "Calculate volume metrics of nginx-conf for pod openshift-nmstate/nmstate-console-plugin-7754f76f8b-54v72" (21-Jan-2026 12:45:10.814) (total time: 2236ms): Jan 21 12:45:13 crc kubenswrapper[4745]: Trace[827376688]: [2.236949782s] [2.236949782s] END Jan 21 12:45:14 crc kubenswrapper[4745]: I0121 12:45:14.068429 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:45:14 crc kubenswrapper[4745]: I0121 12:45:14.129972 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:45:14 crc kubenswrapper[4745]: I0121 12:45:14.308964 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tw2cw"] Jan 21 12:45:15 crc kubenswrapper[4745]: I0121 12:45:15.103565 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tw2cw" podUID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" containerName="registry-server" containerID="cri-o://54461afc49105b7cbb9d5c4bc9a5dee93b5edb5038a0ab3571038ae4cbceb4a7" gracePeriod=2 Jan 21 12:45:15 crc kubenswrapper[4745]: E0121 12:45:15.227977 4745 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb96e7474_6a17_4be7_ba7e_fed224e36a9c.slice/crio-54461afc49105b7cbb9d5c4bc9a5dee93b5edb5038a0ab3571038ae4cbceb4a7.scope\": RecentStats: unable to find data in memory cache]" Jan 21 12:45:16 crc kubenswrapper[4745]: I0121 12:45:16.148157 4745 generic.go:334] "Generic (PLEG): container finished" podID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" containerID="54461afc49105b7cbb9d5c4bc9a5dee93b5edb5038a0ab3571038ae4cbceb4a7" exitCode=0 Jan 21 12:45:16 crc kubenswrapper[4745]: I0121 12:45:16.148569 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-tw2cw" event={"ID":"b96e7474-6a17-4be7-ba7e-fed224e36a9c","Type":"ContainerDied","Data":"54461afc49105b7cbb9d5c4bc9a5dee93b5edb5038a0ab3571038ae4cbceb4a7"} Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.064984 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.126097 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crptk\" (UniqueName: \"kubernetes.io/projected/b96e7474-6a17-4be7-ba7e-fed224e36a9c-kube-api-access-crptk\") pod \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.126191 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-utilities\") pod \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.126260 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-catalog-content\") pod \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\" (UID: \"b96e7474-6a17-4be7-ba7e-fed224e36a9c\") " Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.127789 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-utilities" (OuterVolumeSpecName: "utilities") pod "b96e7474-6a17-4be7-ba7e-fed224e36a9c" (UID: "b96e7474-6a17-4be7-ba7e-fed224e36a9c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.153134 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b96e7474-6a17-4be7-ba7e-fed224e36a9c-kube-api-access-crptk" (OuterVolumeSpecName: "kube-api-access-crptk") pod "b96e7474-6a17-4be7-ba7e-fed224e36a9c" (UID: "b96e7474-6a17-4be7-ba7e-fed224e36a9c"). InnerVolumeSpecName "kube-api-access-crptk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.182039 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tw2cw" event={"ID":"b96e7474-6a17-4be7-ba7e-fed224e36a9c","Type":"ContainerDied","Data":"07f727d46bf410b967f4799e14a72c1c2240b1450e40317f5a3fd8ba1e7a347e"} Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.182092 4745 scope.go:117] "RemoveContainer" containerID="54461afc49105b7cbb9d5c4bc9a5dee93b5edb5038a0ab3571038ae4cbceb4a7" Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.182248 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tw2cw" Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.229538 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.229563 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crptk\" (UniqueName: \"kubernetes.io/projected/b96e7474-6a17-4be7-ba7e-fed224e36a9c-kube-api-access-crptk\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.257396 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b96e7474-6a17-4be7-ba7e-fed224e36a9c" (UID: "b96e7474-6a17-4be7-ba7e-fed224e36a9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.266715 4745 scope.go:117] "RemoveContainer" containerID="ca4bc9f8f127ad81824af88bf7ad0bb36b7d9d9286198c25f3d824de29bb872c" Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.318655 4745 scope.go:117] "RemoveContainer" containerID="a29ec2365a961cb654982fc384c07e72e51a2be554f95782bc5a858a8ed59e62" Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.331077 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b96e7474-6a17-4be7-ba7e-fed224e36a9c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.569581 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tw2cw"] Jan 21 12:45:17 crc kubenswrapper[4745]: I0121 12:45:17.716209 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tw2cw"] Jan 21 12:45:18 crc kubenswrapper[4745]: I0121 12:45:18.014258 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" path="/var/lib/kubelet/pods/b96e7474-6a17-4be7-ba7e-fed224e36a9c/volumes" Jan 21 12:45:24 crc kubenswrapper[4745]: I0121 12:45:24.000439 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:45:24 crc kubenswrapper[4745]: E0121 12:45:24.001312 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:45:35 crc kubenswrapper[4745]: I0121 
12:45:35.000737 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:45:35 crc kubenswrapper[4745]: E0121 12:45:35.001976 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:45:47 crc kubenswrapper[4745]: I0121 12:45:46.999838 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:45:47 crc kubenswrapper[4745]: E0121 12:45:47.000526 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:45:59 crc kubenswrapper[4745]: I0121 12:45:59.685311 4745 scope.go:117] "RemoveContainer" containerID="37e17c2eeefc52b5a34e8ba5173f5bf9405ed9a3fbea58e672a721ed177de78c" Jan 21 12:46:00 crc kubenswrapper[4745]: I0121 12:46:00.001870 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:46:00 crc kubenswrapper[4745]: E0121 12:46:00.003037 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:46:11 crc kubenswrapper[4745]: I0121 12:46:11.000463 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:46:11 crc kubenswrapper[4745]: E0121 12:46:11.002616 4745 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-b8tqm_openshift-machine-config-operator(a8abb3db-dbf8-4568-a6dc-c88674d222b1)\"" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" Jan 21 12:46:26 crc kubenswrapper[4745]: I0121 12:46:26.011947 4745 scope.go:117] "RemoveContainer" containerID="cba30cf93ad4642755286331424ef847d1b137bd35a5b021120725beba2267d8" Jan 21 12:46:26 crc kubenswrapper[4745]: I0121 12:46:26.870596 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" event={"ID":"a8abb3db-dbf8-4568-a6dc-c88674d222b1","Type":"ContainerStarted","Data":"b4df2c2fbb7f9ef4206caf43b6b7cb8f6566f05bfb36617ae263b170f31c57db"} Jan 21 12:46:56 crc kubenswrapper[4745]: I0121 12:46:56.177364 4745 generic.go:334] "Generic (PLEG): container finished" podID="8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" containerID="24840b63b2691cf235b0530fa5478355cd5d0f3b0144cc6f40de8048b0909da4" exitCode=0 Jan 21 12:46:56 crc kubenswrapper[4745]: I0121 12:46:56.177472 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-hb6b2/must-gather-g4bpz" event={"ID":"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522","Type":"ContainerDied","Data":"24840b63b2691cf235b0530fa5478355cd5d0f3b0144cc6f40de8048b0909da4"} Jan 21 
12:46:56 crc kubenswrapper[4745]: I0121 12:46:56.178836 4745 scope.go:117] "RemoveContainer" containerID="24840b63b2691cf235b0530fa5478355cd5d0f3b0144cc6f40de8048b0909da4" Jan 21 12:46:57 crc kubenswrapper[4745]: I0121 12:46:57.087384 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hb6b2_must-gather-g4bpz_8d80951c-31c7-4ee9-87fd-0d3f6ad0f522/gather/0.log" Jan 21 12:47:05 crc kubenswrapper[4745]: I0121 12:47:05.739072 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-hb6b2/must-gather-g4bpz"] Jan 21 12:47:05 crc kubenswrapper[4745]: I0121 12:47:05.739985 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-hb6b2/must-gather-g4bpz" podUID="8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" containerName="copy" containerID="cri-o://fd4172a33328a1d7937186c84f8454b8495c8c2b617ca91f10dc76b33a6501b8" gracePeriod=2 Jan 21 12:47:05 crc kubenswrapper[4745]: I0121 12:47:05.748314 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-hb6b2/must-gather-g4bpz"] Jan 21 12:47:06 crc kubenswrapper[4745]: I0121 12:47:06.301593 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hb6b2_must-gather-g4bpz_8d80951c-31c7-4ee9-87fd-0d3f6ad0f522/copy/0.log" Jan 21 12:47:06 crc kubenswrapper[4745]: I0121 12:47:06.302250 4745 generic.go:334] "Generic (PLEG): container finished" podID="8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" containerID="fd4172a33328a1d7937186c84f8454b8495c8c2b617ca91f10dc76b33a6501b8" exitCode=143 Jan 21 12:47:07 crc kubenswrapper[4745]: I0121 12:47:07.313720 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hb6b2_must-gather-g4bpz_8d80951c-31c7-4ee9-87fd-0d3f6ad0f522/copy/0.log" Jan 21 12:47:07 crc kubenswrapper[4745]: I0121 12:47:07.315170 4745 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="334c6a46aa3a1724ae13c50837aaf159fcea3bd1d443b80f9e74a3f9545a6345" Jan 21 12:47:07 crc kubenswrapper[4745]: I0121 12:47:07.386542 4745 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-hb6b2_must-gather-g4bpz_8d80951c-31c7-4ee9-87fd-0d3f6ad0f522/copy/0.log" Jan 21 12:47:07 crc kubenswrapper[4745]: I0121 12:47:07.387180 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-hb6b2/must-gather-g4bpz" Jan 21 12:47:07 crc kubenswrapper[4745]: I0121 12:47:07.510474 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-must-gather-output\") pod \"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522\" (UID: \"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522\") " Jan 21 12:47:07 crc kubenswrapper[4745]: I0121 12:47:07.510847 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxs2d\" (UniqueName: \"kubernetes.io/projected/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-kube-api-access-bxs2d\") pod \"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522\" (UID: \"8d80951c-31c7-4ee9-87fd-0d3f6ad0f522\") " Jan 21 12:47:07 crc kubenswrapper[4745]: I0121 12:47:07.518285 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-kube-api-access-bxs2d" (OuterVolumeSpecName: "kube-api-access-bxs2d") pod "8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" (UID: "8d80951c-31c7-4ee9-87fd-0d3f6ad0f522"). InnerVolumeSpecName "kube-api-access-bxs2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:47:07 crc kubenswrapper[4745]: I0121 12:47:07.614039 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxs2d\" (UniqueName: \"kubernetes.io/projected/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-kube-api-access-bxs2d\") on node \"crc\" DevicePath \"\"" Jan 21 12:47:07 crc kubenswrapper[4745]: I0121 12:47:07.749339 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" (UID: "8d80951c-31c7-4ee9-87fd-0d3f6ad0f522"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:47:07 crc kubenswrapper[4745]: I0121 12:47:07.817648 4745 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 21 12:47:08 crc kubenswrapper[4745]: I0121 12:47:08.011756 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" path="/var/lib/kubelet/pods/8d80951c-31c7-4ee9-87fd-0d3f6ad0f522/volumes" Jan 21 12:47:08 crc kubenswrapper[4745]: E0121 12:47:08.242723 4745 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d80951c_31c7_4ee9_87fd_0d3f6ad0f522.slice/crio-334c6a46aa3a1724ae13c50837aaf159fcea3bd1d443b80f9e74a3f9545a6345\": RecentStats: unable to find data in memory cache]" Jan 21 12:47:08 crc kubenswrapper[4745]: I0121 12:47:08.322765 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-hb6b2/must-gather-g4bpz" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.302232 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gqlc6"] Jan 21 12:47:41 crc kubenswrapper[4745]: E0121 12:47:41.303249 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3" containerName="collect-profiles" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.303265 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3" containerName="collect-profiles" Jan 21 12:47:41 crc kubenswrapper[4745]: E0121 12:47:41.303304 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" containerName="extract-content" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.303312 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" containerName="extract-content" Jan 21 12:47:41 crc kubenswrapper[4745]: E0121 12:47:41.303328 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" containerName="extract-utilities" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.303336 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" containerName="extract-utilities" Jan 21 12:47:41 crc kubenswrapper[4745]: E0121 12:47:41.303350 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" containerName="copy" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.303358 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" containerName="copy" Jan 21 12:47:41 crc kubenswrapper[4745]: E0121 12:47:41.303371 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" 
containerName="registry-server" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.303378 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" containerName="registry-server" Jan 21 12:47:41 crc kubenswrapper[4745]: E0121 12:47:41.303396 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" containerName="gather" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.303404 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" containerName="gather" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.303963 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" containerName="gather" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.303985 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d80951c-31c7-4ee9-87fd-0d3f6ad0f522" containerName="copy" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.304007 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b06fcbf-4922-4cd3-aeee-ac0f8d5883b3" containerName="collect-profiles" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.304031 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b96e7474-6a17-4be7-ba7e-fed224e36a9c" containerName="registry-server" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.305670 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.316159 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gqlc6"] Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.457991 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-catalog-content\") pod \"certified-operators-gqlc6\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.458146 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8nz8\" (UniqueName: \"kubernetes.io/projected/3ba87b31-0438-492d-871c-6b51d73b7244-kube-api-access-p8nz8\") pod \"certified-operators-gqlc6\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.458310 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-utilities\") pod \"certified-operators-gqlc6\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.560118 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-catalog-content\") pod \"certified-operators-gqlc6\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.560775 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-p8nz8\" (UniqueName: \"kubernetes.io/projected/3ba87b31-0438-492d-871c-6b51d73b7244-kube-api-access-p8nz8\") pod \"certified-operators-gqlc6\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.560769 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-catalog-content\") pod \"certified-operators-gqlc6\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.561040 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-utilities\") pod \"certified-operators-gqlc6\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.561615 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-utilities\") pod \"certified-operators-gqlc6\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.597894 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8nz8\" (UniqueName: \"kubernetes.io/projected/3ba87b31-0438-492d-871c-6b51d73b7244-kube-api-access-p8nz8\") pod \"certified-operators-gqlc6\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:41 crc kubenswrapper[4745]: I0121 12:47:41.626688 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:42 crc kubenswrapper[4745]: I0121 12:47:42.123324 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gqlc6"] Jan 21 12:47:42 crc kubenswrapper[4745]: I0121 12:47:42.635298 4745 generic.go:334] "Generic (PLEG): container finished" podID="3ba87b31-0438-492d-871c-6b51d73b7244" containerID="13340b3313fb63a1c1677ab1335023c8296eb4e9612727de021a9a92968dc431" exitCode=0 Jan 21 12:47:42 crc kubenswrapper[4745]: I0121 12:47:42.635685 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqlc6" event={"ID":"3ba87b31-0438-492d-871c-6b51d73b7244","Type":"ContainerDied","Data":"13340b3313fb63a1c1677ab1335023c8296eb4e9612727de021a9a92968dc431"} Jan 21 12:47:42 crc kubenswrapper[4745]: I0121 12:47:42.635719 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqlc6" event={"ID":"3ba87b31-0438-492d-871c-6b51d73b7244","Type":"ContainerStarted","Data":"41b14f4ba93480ea2caf97ad70e11dba15a053f9282efbf0cb60dcdd16659eeb"} Jan 21 12:47:43 crc kubenswrapper[4745]: I0121 12:47:43.646896 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqlc6" event={"ID":"3ba87b31-0438-492d-871c-6b51d73b7244","Type":"ContainerStarted","Data":"26cfba8cd7884f8a8f707cecd6860c739790fcb922c690782eb64d66d490a445"} Jan 21 12:47:45 crc kubenswrapper[4745]: I0121 12:47:45.667946 4745 generic.go:334] "Generic (PLEG): container finished" podID="3ba87b31-0438-492d-871c-6b51d73b7244" containerID="26cfba8cd7884f8a8f707cecd6860c739790fcb922c690782eb64d66d490a445" exitCode=0 Jan 21 12:47:45 crc kubenswrapper[4745]: I0121 12:47:45.668035 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqlc6" 
event={"ID":"3ba87b31-0438-492d-871c-6b51d73b7244","Type":"ContainerDied","Data":"26cfba8cd7884f8a8f707cecd6860c739790fcb922c690782eb64d66d490a445"} Jan 21 12:47:46 crc kubenswrapper[4745]: I0121 12:47:46.694452 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqlc6" event={"ID":"3ba87b31-0438-492d-871c-6b51d73b7244","Type":"ContainerStarted","Data":"638a6ff6508121500db8501bcbcb3681bf05ab825ae748f23f9a3193dc725e92"} Jan 21 12:47:46 crc kubenswrapper[4745]: I0121 12:47:46.719753 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gqlc6" podStartSLOduration=2.222656948 podStartE2EDuration="5.719728667s" podCreationTimestamp="2026-01-21 12:47:41 +0000 UTC" firstStartedPulling="2026-01-21 12:47:42.639127081 +0000 UTC m=+7847.099914679" lastFinishedPulling="2026-01-21 12:47:46.13619878 +0000 UTC m=+7850.596986398" observedRunningTime="2026-01-21 12:47:46.713813903 +0000 UTC m=+7851.174601531" watchObservedRunningTime="2026-01-21 12:47:46.719728667 +0000 UTC m=+7851.180516275" Jan 21 12:47:49 crc kubenswrapper[4745]: I0121 12:47:49.891728 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cl5z9"] Jan 21 12:47:49 crc kubenswrapper[4745]: I0121 12:47:49.895003 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:47:49 crc kubenswrapper[4745]: I0121 12:47:49.908291 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cl5z9"] Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.077037 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hrv5\" (UniqueName: \"kubernetes.io/projected/b5b81048-86dd-4bfd-a962-883c8010b1c0-kube-api-access-4hrv5\") pod \"redhat-operators-cl5z9\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.077096 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-catalog-content\") pod \"redhat-operators-cl5z9\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.077236 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-utilities\") pod \"redhat-operators-cl5z9\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.179215 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hrv5\" (UniqueName: \"kubernetes.io/projected/b5b81048-86dd-4bfd-a962-883c8010b1c0-kube-api-access-4hrv5\") pod \"redhat-operators-cl5z9\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.179308 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-catalog-content\") pod \"redhat-operators-cl5z9\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.179343 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-utilities\") pod \"redhat-operators-cl5z9\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.180101 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-utilities\") pod \"redhat-operators-cl5z9\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.180264 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-catalog-content\") pod \"redhat-operators-cl5z9\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.202473 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hrv5\" (UniqueName: \"kubernetes.io/projected/b5b81048-86dd-4bfd-a962-883c8010b1c0-kube-api-access-4hrv5\") pod \"redhat-operators-cl5z9\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.215564 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.701827 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cl5z9"] Jan 21 12:47:50 crc kubenswrapper[4745]: W0121 12:47:50.711011 4745 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5b81048_86dd_4bfd_a962_883c8010b1c0.slice/crio-23edac20ce7c19e069f8fb682ffcea904b25f7eae81bb88aa13be8d0f40658cb WatchSource:0}: Error finding container 23edac20ce7c19e069f8fb682ffcea904b25f7eae81bb88aa13be8d0f40658cb: Status 404 returned error can't find the container with id 23edac20ce7c19e069f8fb682ffcea904b25f7eae81bb88aa13be8d0f40658cb Jan 21 12:47:50 crc kubenswrapper[4745]: I0121 12:47:50.734780 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cl5z9" event={"ID":"b5b81048-86dd-4bfd-a962-883c8010b1c0","Type":"ContainerStarted","Data":"23edac20ce7c19e069f8fb682ffcea904b25f7eae81bb88aa13be8d0f40658cb"} Jan 21 12:47:51 crc kubenswrapper[4745]: I0121 12:47:51.627056 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:51 crc kubenswrapper[4745]: I0121 12:47:51.627427 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:51 crc kubenswrapper[4745]: I0121 12:47:51.678912 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:51 crc kubenswrapper[4745]: I0121 12:47:51.743974 4745 generic.go:334] "Generic (PLEG): container finished" podID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerID="f31b9634e880bfd0e1506865b0f02879f617be384a83433a92eb8b73d67b66d5" exitCode=0 Jan 21 12:47:51 crc kubenswrapper[4745]: I0121 12:47:51.745700 4745 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cl5z9" event={"ID":"b5b81048-86dd-4bfd-a962-883c8010b1c0","Type":"ContainerDied","Data":"f31b9634e880bfd0e1506865b0f02879f617be384a83433a92eb8b73d67b66d5"} Jan 21 12:47:51 crc kubenswrapper[4745]: I0121 12:47:51.801432 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:52 crc kubenswrapper[4745]: I0121 12:47:52.483125 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gqlc6"] Jan 21 12:47:53 crc kubenswrapper[4745]: I0121 12:47:53.760481 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gqlc6" podUID="3ba87b31-0438-492d-871c-6b51d73b7244" containerName="registry-server" containerID="cri-o://638a6ff6508121500db8501bcbcb3681bf05ab825ae748f23f9a3193dc725e92" gracePeriod=2 Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.785640 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cl5z9" event={"ID":"b5b81048-86dd-4bfd-a962-883c8010b1c0","Type":"ContainerStarted","Data":"f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e"} Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.792384 4745 generic.go:334] "Generic (PLEG): container finished" podID="3ba87b31-0438-492d-871c-6b51d73b7244" containerID="638a6ff6508121500db8501bcbcb3681bf05ab825ae748f23f9a3193dc725e92" exitCode=0 Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.792436 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gqlc6" event={"ID":"3ba87b31-0438-492d-871c-6b51d73b7244","Type":"ContainerDied","Data":"638a6ff6508121500db8501bcbcb3681bf05ab825ae748f23f9a3193dc725e92"} Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.792467 4745 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-gqlc6" event={"ID":"3ba87b31-0438-492d-871c-6b51d73b7244","Type":"ContainerDied","Data":"41b14f4ba93480ea2caf97ad70e11dba15a053f9282efbf0cb60dcdd16659eeb"} Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.792482 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b14f4ba93480ea2caf97ad70e11dba15a053f9282efbf0cb60dcdd16659eeb" Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.820178 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.877727 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-catalog-content\") pod \"3ba87b31-0438-492d-871c-6b51d73b7244\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.878120 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-utilities\") pod \"3ba87b31-0438-492d-871c-6b51d73b7244\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.878321 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8nz8\" (UniqueName: \"kubernetes.io/projected/3ba87b31-0438-492d-871c-6b51d73b7244-kube-api-access-p8nz8\") pod \"3ba87b31-0438-492d-871c-6b51d73b7244\" (UID: \"3ba87b31-0438-492d-871c-6b51d73b7244\") " Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.880006 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-utilities" (OuterVolumeSpecName: "utilities") pod "3ba87b31-0438-492d-871c-6b51d73b7244" (UID: 
"3ba87b31-0438-492d-871c-6b51d73b7244"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.884578 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ba87b31-0438-492d-871c-6b51d73b7244-kube-api-access-p8nz8" (OuterVolumeSpecName: "kube-api-access-p8nz8") pod "3ba87b31-0438-492d-871c-6b51d73b7244" (UID: "3ba87b31-0438-492d-871c-6b51d73b7244"). InnerVolumeSpecName "kube-api-access-p8nz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.916205 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3ba87b31-0438-492d-871c-6b51d73b7244" (UID: "3ba87b31-0438-492d-871c-6b51d73b7244"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.982930 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8nz8\" (UniqueName: \"kubernetes.io/projected/3ba87b31-0438-492d-871c-6b51d73b7244-kube-api-access-p8nz8\") on node \"crc\" DevicePath \"\"" Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.982974 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:47:54 crc kubenswrapper[4745]: I0121 12:47:54.982986 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3ba87b31-0438-492d-871c-6b51d73b7244-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:47:55 crc kubenswrapper[4745]: I0121 12:47:55.805511 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gqlc6" Jan 21 12:47:55 crc kubenswrapper[4745]: I0121 12:47:55.853438 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gqlc6"] Jan 21 12:47:55 crc kubenswrapper[4745]: I0121 12:47:55.865563 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gqlc6"] Jan 21 12:47:56 crc kubenswrapper[4745]: I0121 12:47:56.012663 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ba87b31-0438-492d-871c-6b51d73b7244" path="/var/lib/kubelet/pods/3ba87b31-0438-492d-871c-6b51d73b7244/volumes" Jan 21 12:47:59 crc kubenswrapper[4745]: I0121 12:47:59.853883 4745 generic.go:334] "Generic (PLEG): container finished" podID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerID="f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e" exitCode=0 Jan 21 12:47:59 crc kubenswrapper[4745]: I0121 12:47:59.854011 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cl5z9" event={"ID":"b5b81048-86dd-4bfd-a962-883c8010b1c0","Type":"ContainerDied","Data":"f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e"} Jan 21 12:47:59 crc kubenswrapper[4745]: I0121 12:47:59.888367 4745 scope.go:117] "RemoveContainer" containerID="fd4172a33328a1d7937186c84f8454b8495c8c2b617ca91f10dc76b33a6501b8" Jan 21 12:47:59 crc kubenswrapper[4745]: I0121 12:47:59.914956 4745 scope.go:117] "RemoveContainer" containerID="24840b63b2691cf235b0530fa5478355cd5d0f3b0144cc6f40de8048b0909da4" Jan 21 12:48:03 crc kubenswrapper[4745]: I0121 12:48:03.932355 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cl5z9" event={"ID":"b5b81048-86dd-4bfd-a962-883c8010b1c0","Type":"ContainerStarted","Data":"9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b"} Jan 21 12:48:04 crc kubenswrapper[4745]: I0121 12:48:04.978870 4745 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cl5z9" podStartSLOduration=4.400912057 podStartE2EDuration="15.978843676s" podCreationTimestamp="2026-01-21 12:47:49 +0000 UTC" firstStartedPulling="2026-01-21 12:47:51.747296448 +0000 UTC m=+7856.208084046" lastFinishedPulling="2026-01-21 12:48:03.325228067 +0000 UTC m=+7867.786015665" observedRunningTime="2026-01-21 12:48:04.973864908 +0000 UTC m=+7869.434652526" watchObservedRunningTime="2026-01-21 12:48:04.978843676 +0000 UTC m=+7869.439631284" Jan 21 12:48:10 crc kubenswrapper[4745]: I0121 12:48:10.215743 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:48:10 crc kubenswrapper[4745]: I0121 12:48:10.216307 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:48:11 crc kubenswrapper[4745]: I0121 12:48:11.265321 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cl5z9" podUID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerName="registry-server" probeResult="failure" output=< Jan 21 12:48:11 crc kubenswrapper[4745]: timeout: failed to connect service ":50051" within 1s Jan 21 12:48:11 crc kubenswrapper[4745]: > Jan 21 12:48:20 crc kubenswrapper[4745]: I0121 12:48:20.264587 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:48:20 crc kubenswrapper[4745]: I0121 12:48:20.313482 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:48:21 crc kubenswrapper[4745]: I0121 12:48:21.085434 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cl5z9"] Jan 21 12:48:22 crc kubenswrapper[4745]: I0121 12:48:22.120769 4745 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cl5z9" podUID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerName="registry-server" containerID="cri-o://9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b" gracePeriod=2 Jan 21 12:48:22 crc kubenswrapper[4745]: I0121 12:48:22.668735 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:48:22 crc kubenswrapper[4745]: I0121 12:48:22.757142 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-catalog-content\") pod \"b5b81048-86dd-4bfd-a962-883c8010b1c0\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " Jan 21 12:48:22 crc kubenswrapper[4745]: I0121 12:48:22.757785 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-utilities\") pod \"b5b81048-86dd-4bfd-a962-883c8010b1c0\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " Jan 21 12:48:22 crc kubenswrapper[4745]: I0121 12:48:22.757886 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hrv5\" (UniqueName: \"kubernetes.io/projected/b5b81048-86dd-4bfd-a962-883c8010b1c0-kube-api-access-4hrv5\") pod \"b5b81048-86dd-4bfd-a962-883c8010b1c0\" (UID: \"b5b81048-86dd-4bfd-a962-883c8010b1c0\") " Jan 21 12:48:22 crc kubenswrapper[4745]: I0121 12:48:22.758472 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-utilities" (OuterVolumeSpecName: "utilities") pod "b5b81048-86dd-4bfd-a962-883c8010b1c0" (UID: "b5b81048-86dd-4bfd-a962-883c8010b1c0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:48:22 crc kubenswrapper[4745]: I0121 12:48:22.758758 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 21 12:48:22 crc kubenswrapper[4745]: I0121 12:48:22.779084 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b81048-86dd-4bfd-a962-883c8010b1c0-kube-api-access-4hrv5" (OuterVolumeSpecName: "kube-api-access-4hrv5") pod "b5b81048-86dd-4bfd-a962-883c8010b1c0" (UID: "b5b81048-86dd-4bfd-a962-883c8010b1c0"). InnerVolumeSpecName "kube-api-access-4hrv5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 21 12:48:22 crc kubenswrapper[4745]: I0121 12:48:22.860799 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hrv5\" (UniqueName: \"kubernetes.io/projected/b5b81048-86dd-4bfd-a962-883c8010b1c0-kube-api-access-4hrv5\") on node \"crc\" DevicePath \"\"" Jan 21 12:48:22 crc kubenswrapper[4745]: I0121 12:48:22.871404 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5b81048-86dd-4bfd-a962-883c8010b1c0" (UID: "b5b81048-86dd-4bfd-a962-883c8010b1c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 21 12:48:22 crc kubenswrapper[4745]: I0121 12:48:22.962099 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5b81048-86dd-4bfd-a962-883c8010b1c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.136576 4745 generic.go:334] "Generic (PLEG): container finished" podID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerID="9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b" exitCode=0 Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.136669 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cl5z9" event={"ID":"b5b81048-86dd-4bfd-a962-883c8010b1c0","Type":"ContainerDied","Data":"9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b"} Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.137400 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cl5z9" event={"ID":"b5b81048-86dd-4bfd-a962-883c8010b1c0","Type":"ContainerDied","Data":"23edac20ce7c19e069f8fb682ffcea904b25f7eae81bb88aa13be8d0f40658cb"} Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.137436 4745 scope.go:117] "RemoveContainer" containerID="9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b" Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.136667 4745 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cl5z9" Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.190243 4745 scope.go:117] "RemoveContainer" containerID="f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e" Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.205504 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cl5z9"] Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.212105 4745 scope.go:117] "RemoveContainer" containerID="f31b9634e880bfd0e1506865b0f02879f617be384a83433a92eb8b73d67b66d5" Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.217876 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cl5z9"] Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.262574 4745 scope.go:117] "RemoveContainer" containerID="9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b" Jan 21 12:48:23 crc kubenswrapper[4745]: E0121 12:48:23.263826 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b\": container with ID starting with 9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b not found: ID does not exist" containerID="9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b" Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.263942 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b"} err="failed to get container status \"9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b\": rpc error: code = NotFound desc = could not find container \"9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b\": container with ID starting with 9153678611a300a371471f630ee29538b75b5e3018def78d2137d8b031ee682b not found: ID does 
not exist" Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.263966 4745 scope.go:117] "RemoveContainer" containerID="f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e" Jan 21 12:48:23 crc kubenswrapper[4745]: E0121 12:48:23.264321 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e\": container with ID starting with f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e not found: ID does not exist" containerID="f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e" Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.264479 4745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e"} err="failed to get container status \"f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e\": rpc error: code = NotFound desc = could not find container \"f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e\": container with ID starting with f04d8cc1de497bd12f27d2c3d71682d16d25fc65f563eca606cad08f0f8faf6e not found: ID does not exist" Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.264628 4745 scope.go:117] "RemoveContainer" containerID="f31b9634e880bfd0e1506865b0f02879f617be384a83433a92eb8b73d67b66d5" Jan 21 12:48:23 crc kubenswrapper[4745]: E0121 12:48:23.265055 4745 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f31b9634e880bfd0e1506865b0f02879f617be384a83433a92eb8b73d67b66d5\": container with ID starting with f31b9634e880bfd0e1506865b0f02879f617be384a83433a92eb8b73d67b66d5 not found: ID does not exist" containerID="f31b9634e880bfd0e1506865b0f02879f617be384a83433a92eb8b73d67b66d5" Jan 21 12:48:23 crc kubenswrapper[4745]: I0121 12:48:23.265083 4745 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f31b9634e880bfd0e1506865b0f02879f617be384a83433a92eb8b73d67b66d5"} err="failed to get container status \"f31b9634e880bfd0e1506865b0f02879f617be384a83433a92eb8b73d67b66d5\": rpc error: code = NotFound desc = could not find container \"f31b9634e880bfd0e1506865b0f02879f617be384a83433a92eb8b73d67b66d5\": container with ID starting with f31b9634e880bfd0e1506865b0f02879f617be384a83433a92eb8b73d67b66d5 not found: ID does not exist" Jan 21 12:48:24 crc kubenswrapper[4745]: I0121 12:48:24.011506 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b81048-86dd-4bfd-a962-883c8010b1c0" path="/var/lib/kubelet/pods/b5b81048-86dd-4bfd-a962-883c8010b1c0/volumes" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.732626 4745 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j8k9w"] Jan 21 12:48:25 crc kubenswrapper[4745]: E0121 12:48:25.739522 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerName="extract-utilities" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.739821 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerName="extract-utilities" Jan 21 12:48:25 crc kubenswrapper[4745]: E0121 12:48:25.739995 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba87b31-0438-492d-871c-6b51d73b7244" containerName="extract-content" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.740164 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba87b31-0438-492d-871c-6b51d73b7244" containerName="extract-content" Jan 21 12:48:25 crc kubenswrapper[4745]: E0121 12:48:25.740327 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba87b31-0438-492d-871c-6b51d73b7244" containerName="extract-utilities" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.740454 4745 
state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba87b31-0438-492d-871c-6b51d73b7244" containerName="extract-utilities" Jan 21 12:48:25 crc kubenswrapper[4745]: E0121 12:48:25.740611 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerName="extract-content" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.740768 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerName="extract-content" Jan 21 12:48:25 crc kubenswrapper[4745]: E0121 12:48:25.740909 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerName="registry-server" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.741048 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerName="registry-server" Jan 21 12:48:25 crc kubenswrapper[4745]: E0121 12:48:25.741174 4745 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ba87b31-0438-492d-871c-6b51d73b7244" containerName="registry-server" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.741304 4745 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ba87b31-0438-492d-871c-6b51d73b7244" containerName="registry-server" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.741890 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b81048-86dd-4bfd-a962-883c8010b1c0" containerName="registry-server" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.742195 4745 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ba87b31-0438-492d-871c-6b51d73b7244" containerName="registry-server" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.745108 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8k9w" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.768084 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8k9w"] Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.824392 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-utilities\") pod \"redhat-marketplace-j8k9w\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") " pod="openshift-marketplace/redhat-marketplace-j8k9w" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.824718 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzddg\" (UniqueName: \"kubernetes.io/projected/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-kube-api-access-rzddg\") pod \"redhat-marketplace-j8k9w\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") " pod="openshift-marketplace/redhat-marketplace-j8k9w" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.824959 4745 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-catalog-content\") pod \"redhat-marketplace-j8k9w\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") " pod="openshift-marketplace/redhat-marketplace-j8k9w" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.925911 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-catalog-content\") pod \"redhat-marketplace-j8k9w\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") " pod="openshift-marketplace/redhat-marketplace-j8k9w" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.926432 4745 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-utilities\") pod \"redhat-marketplace-j8k9w\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") " pod="openshift-marketplace/redhat-marketplace-j8k9w" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.926469 4745 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzddg\" (UniqueName: \"kubernetes.io/projected/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-kube-api-access-rzddg\") pod \"redhat-marketplace-j8k9w\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") " pod="openshift-marketplace/redhat-marketplace-j8k9w" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.926474 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-catalog-content\") pod \"redhat-marketplace-j8k9w\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") " pod="openshift-marketplace/redhat-marketplace-j8k9w" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.927104 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-utilities\") pod \"redhat-marketplace-j8k9w\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") " pod="openshift-marketplace/redhat-marketplace-j8k9w" Jan 21 12:48:25 crc kubenswrapper[4745]: I0121 12:48:25.948266 4745 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzddg\" (UniqueName: \"kubernetes.io/projected/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-kube-api-access-rzddg\") pod \"redhat-marketplace-j8k9w\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") " pod="openshift-marketplace/redhat-marketplace-j8k9w" Jan 21 12:48:26 crc kubenswrapper[4745]: I0121 12:48:26.077032 4745 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8k9w" Jan 21 12:48:26 crc kubenswrapper[4745]: I0121 12:48:26.618691 4745 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8k9w"] Jan 21 12:48:27 crc kubenswrapper[4745]: I0121 12:48:27.187858 4745 generic.go:334] "Generic (PLEG): container finished" podID="0c2479a4-d5b3-4408-b4d7-313d4c0354a2" containerID="dae89274147d9c645a79e180dfbba6170f3c8add004c75d2e6b9abebf66ae629" exitCode=0 Jan 21 12:48:27 crc kubenswrapper[4745]: I0121 12:48:27.187917 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8k9w" event={"ID":"0c2479a4-d5b3-4408-b4d7-313d4c0354a2","Type":"ContainerDied","Data":"dae89274147d9c645a79e180dfbba6170f3c8add004c75d2e6b9abebf66ae629"} Jan 21 12:48:27 crc kubenswrapper[4745]: I0121 12:48:27.188186 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8k9w" event={"ID":"0c2479a4-d5b3-4408-b4d7-313d4c0354a2","Type":"ContainerStarted","Data":"622e680e230995eb343e8a5d83d065d887ee9b5e132ba37d7595726ba25739af"} Jan 21 12:48:32 crc kubenswrapper[4745]: I0121 12:48:32.236725 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8k9w" event={"ID":"0c2479a4-d5b3-4408-b4d7-313d4c0354a2","Type":"ContainerStarted","Data":"88b1262e7854f8b4d8e90e2a7b531baace15554a5e123836ef976da7583c4b50"} Jan 21 12:48:33 crc kubenswrapper[4745]: I0121 12:48:33.246790 4745 generic.go:334] "Generic (PLEG): container finished" podID="0c2479a4-d5b3-4408-b4d7-313d4c0354a2" containerID="88b1262e7854f8b4d8e90e2a7b531baace15554a5e123836ef976da7583c4b50" exitCode=0 Jan 21 12:48:33 crc kubenswrapper[4745]: I0121 12:48:33.247591 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8k9w" 
event={"ID":"0c2479a4-d5b3-4408-b4d7-313d4c0354a2","Type":"ContainerDied","Data":"88b1262e7854f8b4d8e90e2a7b531baace15554a5e123836ef976da7583c4b50"}
Jan 21 12:48:35 crc kubenswrapper[4745]: I0121 12:48:35.270223 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8k9w" event={"ID":"0c2479a4-d5b3-4408-b4d7-313d4c0354a2","Type":"ContainerStarted","Data":"50d1d4dadae014eb68ed9d1236167bb4ce71d9313645e1e03bbc490a906e970e"}
Jan 21 12:48:35 crc kubenswrapper[4745]: I0121 12:48:35.297812 4745 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j8k9w" podStartSLOduration=2.439288143 podStartE2EDuration="10.297792372s" podCreationTimestamp="2026-01-21 12:48:25 +0000 UTC" firstStartedPulling="2026-01-21 12:48:27.191479944 +0000 UTC m=+7891.652267542" lastFinishedPulling="2026-01-21 12:48:35.049984173 +0000 UTC m=+7899.510771771" observedRunningTime="2026-01-21 12:48:35.288794903 +0000 UTC m=+7899.749582501" watchObservedRunningTime="2026-01-21 12:48:35.297792372 +0000 UTC m=+7899.758579970"
Jan 21 12:48:36 crc kubenswrapper[4745]: I0121 12:48:36.077322 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j8k9w"
Jan 21 12:48:36 crc kubenswrapper[4745]: I0121 12:48:36.077373 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j8k9w"
Jan 21 12:48:37 crc kubenswrapper[4745]: I0121 12:48:37.124156 4745 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-j8k9w" podUID="0c2479a4-d5b3-4408-b4d7-313d4c0354a2" containerName="registry-server" probeResult="failure" output=<
Jan 21 12:48:37 crc kubenswrapper[4745]: 	timeout: failed to connect service ":50051" within 1s
Jan 21 12:48:37 crc kubenswrapper[4745]:  >
Jan 21 12:48:45 crc kubenswrapper[4745]: I0121 12:48:45.866771 4745 patch_prober.go:28] interesting pod/machine-config-daemon-b8tqm container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 21 12:48:45 crc kubenswrapper[4745]: I0121 12:48:45.867876 4745 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-b8tqm" podUID="a8abb3db-dbf8-4568-a6dc-c88674d222b1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 21 12:48:46 crc kubenswrapper[4745]: I0121 12:48:46.136263 4745 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j8k9w"
Jan 21 12:48:46 crc kubenswrapper[4745]: I0121 12:48:46.205854 4745 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j8k9w"
Jan 21 12:48:46 crc kubenswrapper[4745]: I0121 12:48:46.372148 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8k9w"]
Jan 21 12:48:47 crc kubenswrapper[4745]: I0121 12:48:47.406481 4745 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j8k9w" podUID="0c2479a4-d5b3-4408-b4d7-313d4c0354a2" containerName="registry-server" containerID="cri-o://50d1d4dadae014eb68ed9d1236167bb4ce71d9313645e1e03bbc490a906e970e" gracePeriod=2
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.418024 4745 generic.go:334] "Generic (PLEG): container finished" podID="0c2479a4-d5b3-4408-b4d7-313d4c0354a2" containerID="50d1d4dadae014eb68ed9d1236167bb4ce71d9313645e1e03bbc490a906e970e" exitCode=0
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.418335 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8k9w" event={"ID":"0c2479a4-d5b3-4408-b4d7-313d4c0354a2","Type":"ContainerDied","Data":"50d1d4dadae014eb68ed9d1236167bb4ce71d9313645e1e03bbc490a906e970e"}
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.418367 4745 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j8k9w" event={"ID":"0c2479a4-d5b3-4408-b4d7-313d4c0354a2","Type":"ContainerDied","Data":"622e680e230995eb343e8a5d83d065d887ee9b5e132ba37d7595726ba25739af"}
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.418381 4745 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="622e680e230995eb343e8a5d83d065d887ee9b5e132ba37d7595726ba25739af"
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.423753 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8k9w"
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.455173 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-utilities\") pod \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") "
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.455296 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-catalog-content\") pod \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") "
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.455426 4745 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzddg\" (UniqueName: \"kubernetes.io/projected/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-kube-api-access-rzddg\") pod \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\" (UID: \"0c2479a4-d5b3-4408-b4d7-313d4c0354a2\") "
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.456124 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-utilities" (OuterVolumeSpecName: "utilities") pod "0c2479a4-d5b3-4408-b4d7-313d4c0354a2" (UID: "0c2479a4-d5b3-4408-b4d7-313d4c0354a2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.456797 4745 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-utilities\") on node \"crc\" DevicePath \"\""
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.484050 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c2479a4-d5b3-4408-b4d7-313d4c0354a2" (UID: "0c2479a4-d5b3-4408-b4d7-313d4c0354a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.484793 4745 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-kube-api-access-rzddg" (OuterVolumeSpecName: "kube-api-access-rzddg") pod "0c2479a4-d5b3-4408-b4d7-313d4c0354a2" (UID: "0c2479a4-d5b3-4408-b4d7-313d4c0354a2"). InnerVolumeSpecName "kube-api-access-rzddg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.560012 4745 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzddg\" (UniqueName: \"kubernetes.io/projected/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-kube-api-access-rzddg\") on node \"crc\" DevicePath \"\""
Jan 21 12:48:48 crc kubenswrapper[4745]: I0121 12:48:48.560053 4745 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c2479a4-d5b3-4408-b4d7-313d4c0354a2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 21 12:48:49 crc kubenswrapper[4745]: I0121 12:48:49.439851 4745 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j8k9w"
Jan 21 12:48:49 crc kubenswrapper[4745]: I0121 12:48:49.502876 4745 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8k9w"]
Jan 21 12:48:49 crc kubenswrapper[4745]: I0121 12:48:49.511784 4745 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j8k9w"]
Jan 21 12:48:50 crc kubenswrapper[4745]: I0121 12:48:50.012230 4745 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c2479a4-d5b3-4408-b4d7-313d4c0354a2" path="/var/lib/kubelet/pods/0c2479a4-d5b3-4408-b4d7-313d4c0354a2/volumes"